Jan 31 14:54:39 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 14:54:39 crc restorecon[4710]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 
14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:54:39 crc 
restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 
14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:39 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:54:40 crc restorecon[4710]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
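The restorecon pass ends here. Every "not reset as customized by admin" line means the file carries container_file_t, a customizable SELinux type (usually with a per-pod MCS category pair such as c7,c13), so restorecon declines to reset it without the force flag; the final "Relabeled" line shows the kubelet wrapper binary being switched from the generic bin_t to kubelet_exec_t. A minimal shell sketch for inspecting this state on a node; the path is taken from the log above, and the commands are standard SELinux tooling, not output from this journal:

    # Show the current SELinux context of a file
    ls -Z /var/lib/kubelet/config.json

    # Show what the loaded policy says the context should be
    matchpathcon /var/lib/kubelet/config.json

    # Dry-run relabel of the kubelet state dir (-R recursive, -v verbose, -n make no changes);
    # add -F to force-reset customizable types like container_file_t
    restorecon -Rvn /var/lib/kubelet

    # List local file-context customizations (requires policycoreutils tooling)
    semanage fcontext -l -C

restorecon only rewrites contexts that differ from policy and are not treated as admin customizations, which is why the bulk of this log is skip messages rather than relabels.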
Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 14:54:40 crc kubenswrapper[4763]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.780478 4763 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786762 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786799 4763 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786811 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786822 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786832 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786842 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786853 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786865 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786874 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786883 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786921 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786930 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786941 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786953 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
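Four of the six "Flag ... has been deprecated" messages above point at the same fix: move the setting into the file passed via the kubelet's --config flag. A minimal sketch of the corresponding KubeletConfiguration stanzas, with field names from the upstream kubelet.config.k8s.io/v1beta1 API and purely illustrative values (this is not the config actually rendered on this node):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved
    systemReserved:
      cpu: "500m"
      memory: "1Gi"
    # --minimum-container-ttl-duration has no config-file field; its warning says
    # to use eviction thresholds instead
    evictionHard:
      memory.available: "100Mi"

--pod-infra-container-image is the odd one out: it has no KubeletConfiguration equivalent, and per the I0131 server.go line above the sandbox image is expected to come from the CRI runtime's own configuration instead.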
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786962 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786971 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786980 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786989 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.786998 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787007 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787016 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787024 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787033 4763 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787041 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787051 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787060 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787070 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787081 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787089 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787098 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787105 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787113 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787121 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787129 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787136 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787144 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787152 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787160 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787169 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787177 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787184 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787194 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787202 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787210 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787218 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787225 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787245 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
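The long runs of feature_gate.go:330 "unrecognized feature gate" warnings appear to be expected noise on this node: the gate list includes OpenShift-level gates (NewOLM, GatewayAPI, RouteAdvertisements, and so on) that the upstream kubelet's feature-gate parser does not know, so it warns and skips them; only the "Setting GA/deprecated feature gate" lines change behavior. Under that reading, a small sketch that deduplicates the gate names from a fragment of this journal (the sample constant is copied from the warnings above):

```go
// gates.go - pull the unique gate names out of "unrecognized feature gate"
// warnings; the sample string is a fragment copied from the journal above.
package main

import (
	"fmt"
	"regexp"
)

const sample = `W0131 14:54:40.786762 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities ` +
	`W0131 14:54:40.786799 4763 feature_gate.go:330] unrecognized feature gate: NewOLM ` +
	`W0131 14:54:40.786811 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode`

func main() {
	re := regexp.MustCompile(`unrecognized feature gate: ([A-Za-z0-9]+)`)
	seen := map[string]bool{}
	for _, m := range re.FindAllStringSubmatch(sample, -1) {
		if !seen[m[1]] {
			seen[m[1]] = true
			fmt.Println(m[1])
		}
	}
}
```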
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787254 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787264 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787272 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787280 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787288 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787298 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787306 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787315 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787323 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787332 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787342 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787351 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787387 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787396 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787405 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787412 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787420 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787428 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787438 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787446 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787453 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787461 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787469 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.787476 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790509 4763 flags.go:64] FLAG: --address="0.0.0.0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790538 4763 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 31 14:54:40 crc 
kubenswrapper[4763]: I0131 14:54:40.790565 4763 flags.go:64] FLAG: --anonymous-auth="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790578 4763 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790591 4763 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790600 4763 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790612 4763 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790623 4763 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790633 4763 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790642 4763 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790651 4763 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790663 4763 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790672 4763 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790682 4763 flags.go:64] FLAG: --cgroup-root="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790691 4763 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790730 4763 flags.go:64] FLAG: --client-ca-file="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790739 4763 flags.go:64] FLAG: --cloud-config="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790748 4763 flags.go:64] FLAG: --cloud-provider="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790757 4763 flags.go:64] FLAG: --cluster-dns="[]" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790783 4763 flags.go:64] FLAG: --cluster-domain="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790794 4763 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790808 4763 flags.go:64] FLAG: --config-dir="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790819 4763 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790832 4763 flags.go:64] FLAG: --container-log-max-files="5" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790848 4763 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790858 4763 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790867 4763 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790877 4763 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790886 4763 flags.go:64] FLAG: --contention-profiling="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790896 4763 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790906 4763 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790915 4763 
flags.go:64] FLAG: --cpu-manager-policy="none" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790924 4763 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790938 4763 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790948 4763 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790957 4763 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790967 4763 flags.go:64] FLAG: --enable-load-reader="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790976 4763 flags.go:64] FLAG: --enable-server="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.790986 4763 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791000 4763 flags.go:64] FLAG: --event-burst="100" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791010 4763 flags.go:64] FLAG: --event-qps="50" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791020 4763 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791029 4763 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791038 4763 flags.go:64] FLAG: --eviction-hard="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791049 4763 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791059 4763 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791068 4763 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791079 4763 flags.go:64] FLAG: --eviction-soft="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791088 4763 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791097 4763 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791106 4763 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791115 4763 flags.go:64] FLAG: --experimental-mounter-path="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791124 4763 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791133 4763 flags.go:64] FLAG: --fail-swap-on="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791142 4763 flags.go:64] FLAG: --feature-gates="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791153 4763 flags.go:64] FLAG: --file-check-frequency="20s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791170 4763 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791179 4763 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791189 4763 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791198 4763 flags.go:64] FLAG: --healthz-port="10248" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791207 4763 flags.go:64] FLAG: --help="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791216 4763 flags.go:64] FLAG: 
--hostname-override="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791225 4763 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791234 4763 flags.go:64] FLAG: --http-check-frequency="20s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791244 4763 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791253 4763 flags.go:64] FLAG: --image-credential-provider-config="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791262 4763 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791271 4763 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791280 4763 flags.go:64] FLAG: --image-service-endpoint="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791289 4763 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791298 4763 flags.go:64] FLAG: --kube-api-burst="100" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791308 4763 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791318 4763 flags.go:64] FLAG: --kube-api-qps="50" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791327 4763 flags.go:64] FLAG: --kube-reserved="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791336 4763 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791344 4763 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791356 4763 flags.go:64] FLAG: --kubelet-cgroups="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791406 4763 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791419 4763 flags.go:64] FLAG: --lock-file="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791431 4763 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791442 4763 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791455 4763 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791472 4763 flags.go:64] FLAG: --log-json-split-stream="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791486 4763 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791497 4763 flags.go:64] FLAG: --log-text-split-stream="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791509 4763 flags.go:64] FLAG: --logging-format="text" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791535 4763 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791546 4763 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791559 4763 flags.go:64] FLAG: --manifest-url="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791568 4763 flags.go:64] FLAG: --manifest-url-header="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791580 4763 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791590 4763 flags.go:64] FLAG: 
--max-open-files="1000000" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791601 4763 flags.go:64] FLAG: --max-pods="110" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791611 4763 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791623 4763 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791634 4763 flags.go:64] FLAG: --memory-manager-policy="None" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791647 4763 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791659 4763 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791670 4763 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791682 4763 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791738 4763 flags.go:64] FLAG: --node-status-max-images="50" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791749 4763 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791761 4763 flags.go:64] FLAG: --oom-score-adj="-999" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791772 4763 flags.go:64] FLAG: --pod-cidr="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791784 4763 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791802 4763 flags.go:64] FLAG: --pod-manifest-path="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791813 4763 flags.go:64] FLAG: --pod-max-pids="-1" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791825 4763 flags.go:64] FLAG: --pods-per-core="0" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791837 4763 flags.go:64] FLAG: --port="10250" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791849 4763 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791860 4763 flags.go:64] FLAG: --provider-id="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791870 4763 flags.go:64] FLAG: --qos-reserved="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791880 4763 flags.go:64] FLAG: --read-only-port="10255" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791889 4763 flags.go:64] FLAG: --register-node="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791898 4763 flags.go:64] FLAG: --register-schedulable="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791907 4763 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791923 4763 flags.go:64] FLAG: --registry-burst="10" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791932 4763 flags.go:64] FLAG: --registry-qps="5" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791941 4763 flags.go:64] FLAG: --reserved-cpus="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791965 4763 flags.go:64] FLAG: --reserved-memory="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791986 4763 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" 
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.791995 4763 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792004 4763 flags.go:64] FLAG: --rotate-certificates="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792013 4763 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792022 4763 flags.go:64] FLAG: --runonce="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792030 4763 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792040 4763 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792049 4763 flags.go:64] FLAG: --seccomp-default="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792058 4763 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792067 4763 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792076 4763 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792085 4763 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792094 4763 flags.go:64] FLAG: --storage-driver-password="root" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792103 4763 flags.go:64] FLAG: --storage-driver-secure="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792112 4763 flags.go:64] FLAG: --storage-driver-table="stats" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792121 4763 flags.go:64] FLAG: --storage-driver-user="root" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792129 4763 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792138 4763 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792148 4763 flags.go:64] FLAG: --system-cgroups="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792156 4763 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792171 4763 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792180 4763 flags.go:64] FLAG: --tls-cert-file="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792188 4763 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792200 4763 flags.go:64] FLAG: --tls-min-version="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792209 4763 flags.go:64] FLAG: --tls-private-key-file="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792218 4763 flags.go:64] FLAG: --topology-manager-policy="none" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792228 4763 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792236 4763 flags.go:64] FLAG: --topology-manager-scope="container" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792245 4763 flags.go:64] FLAG: --v="2" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792257 4763 flags.go:64] FLAG: --version="false" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792268 4763 flags.go:64] FLAG: --vmodule="" Jan 31 14:54:40 crc 
kubenswrapper[4763]: I0131 14:54:40.792279 4763 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.792291 4763 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792575 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792588 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792607 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792616 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792624 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792632 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792641 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792649 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792659 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792668 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792676 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792684 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792724 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792733 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792743 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792754 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792762 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792771 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792781 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792791 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792812 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
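The flags.go:64 FLAG dump above records the effective value of every kubelet command-line flag at -v=2, which makes it a convenient place to spot non-default settings on this node (--system-reserved, --register-with-taints, --node-labels). A sketch that folds the dump into a map, for example to diff one node's effective flags against another's; kubelet.log is again a hypothetical local copy of the journal:

```go
// flagdump.go - turn the flags.go:64 `FLAG: --name="value"` dump into a map.
// The path "kubelet.log" is a hypothetical local copy of this journal.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// e.g. FLAG: --cgroup-driver="cgroupfs"  ->  name=cgroup-driver value=cgroupfs
	re := regexp.MustCompile(`FLAG: --([a-z0-9-]+)="((?:[^"\\]|\\.)*)"`)
	flags := map[string]string{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			flags[m[1]] = m[2]
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("parsed %d flags; --config=%q\n", len(flags), flags["config"])
}
```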
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792825 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792836 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792846 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792855 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792864 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792872 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792881 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792889 4763 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792896 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792904 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792913 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792921 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792928 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792936 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792943 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792951 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792959 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792978 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792987 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.792995 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793003 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793010 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793018 4763 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793026 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793033 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793041 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793048 4763 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793056 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793064 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793072 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793079 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793087 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793095 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793103 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793112 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793121 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793131 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793141 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793150 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793158 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793167 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793176 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793184 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793193 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793202 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793210 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793217 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793225 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793234 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.793241 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.794499 4763 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false 
KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.811750 4763 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.811827 4763 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.811983 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812000 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812009 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812018 4763 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812028 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812036 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812045 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812053 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812060 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812068 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812076 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812085 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812092 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812100 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812108 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812117 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812126 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812136 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812148 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812158 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812167 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812175 4763 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812187 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812201 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812210 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812221 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812230 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812239 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812247 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812258 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812267 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812275 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812283 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812291 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812299 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812307 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812315 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812323 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812331 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812339 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812347 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812355 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812363 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812371 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812379 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812388 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812399 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy 
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812414 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812428 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812445 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812458 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812470 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812483 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812497 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812511 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812523 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812532 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812541 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812549 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812558 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812566 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812575 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812583 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812591 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812599 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812606 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812614 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812622 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812630 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812638 4763 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.812645 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.812660 4763 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813157 4763 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813175 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813184 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813195 4763 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813208 4763 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813219 4763 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813228 4763 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813238 4763 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813247 4763 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813256 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813264 4763 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813273 4763 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813280 4763 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813291 4763 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813301 4763 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
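Each feature_gate.go:386 "feature gates:" line prints the state the kubelet actually resolved once the warnings are done, and the same map is emitted every time the gates are re-parsed during startup, which is why the warning block and the map repeat. A sketch that parses that {map[Name:bool ...]} rendering into a Go map; the sample constant is trimmed from the map printed above:

```go
// gatemap.go - parse the resolved feature-gate state that the kubelet prints
// at feature_gate.go:386. The sample is a trimmed copy of the map above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const sample = `feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false KMSv1:true ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}`

func main() {
	start := strings.Index(sample, "{map[")
	end := strings.LastIndex(sample, "]}")
	if start < 0 || end < 0 {
		panic("unexpected format")
	}
	gates := map[string]bool{}
	for _, kv := range strings.Fields(sample[start+len("{map[") : end]) {
		name, val, ok := strings.Cut(kv, ":")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			continue
		}
		gates[name] = b
	}
	fmt.Println(gates["KMSv1"], gates["VolumeAttributesClass"]) // true false
}
```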
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813311 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813320 4763 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813329 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813337 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813345 4763 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813354 4763 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813362 4763 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813369 4763 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813378 4763 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813385 4763 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813394 4763 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813402 4763 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813412 4763 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813421 4763 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813429 4763 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813438 4763 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813447 4763 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813455 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813463 4763 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813473 4763 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813482 4763 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813490 4763 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813497 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813505 4763 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813514 4763 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813521 4763 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:54:40 crc 
kubenswrapper[4763]: W0131 14:54:40.813530 4763 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813538 4763 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813547 4763 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813555 4763 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813563 4763 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813572 4763 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813579 4763 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813587 4763 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813595 4763 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813603 4763 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813611 4763 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813621 4763 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813631 4763 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813640 4763 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813649 4763 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813658 4763 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813666 4763 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813674 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813682 4763 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813689 4763 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813719 4763 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813727 4763 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813735 4763 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813743 4763 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813787 4763 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813795 4763 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813803 4763 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813811 4763 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813819 4763 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.813827 4763 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.813839 4763 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.814201 4763 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.820874 4763 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.821042 4763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.822959 4763 server.go:997] "Starting client certificate rotation"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.823012 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.823289 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-20 22:42:00.626152106 +0000 UTC
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.823487 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.856979 4763 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.860962 4763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.861413 4763 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.881436 4763 log.go:25] "Validated CRI v1 runtime API"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.923581 4763 log.go:25] "Validated CRI v1 image API"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.925834 4763 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.932594 4763 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-14-50-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.932643 4763 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.963815 4763 manager.go:217] Machine: {Timestamp:2026-01-31 14:54:40.960140973 +0000 UTC m=+0.714879356 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dae69c69-4f41-4a04-af59-12d21fa5088f BootID:b7852931-3d3a-417c-b1dc-4eae70947913 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3d:f1:01 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3d:f1:01 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:60:29:6d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c5:0c:c9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e1:e3:cc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c6:d8:c8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ce:15:bc:98:93:c1 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c6:8a:2f:39:7e:c5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964289 4763 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964458 4763 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964688 4763 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964876 4763 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.964915 4763 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965112 4763 topology_manager.go:138] "Creating topology manager with none policy"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965124 4763 container_manager_linux.go:303] "Creating device plugin manager"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965608 4763 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.965640 4763 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.966292 4763 state_mem.go:36] "Initialized new in-memory state store"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.966370 4763 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973554 4763 kubelet.go:418] "Attempting to sync node with API server"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973580 4763 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973603 4763 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973614 4763 kubelet.go:324] "Adding apiserver pod source"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.973628 4763 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.977208 4763 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.977981 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.978153 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.977980 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.978299 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.978181 4763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.980308 4763 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982034 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982058 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982065 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982071 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982082 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982090 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982098 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982109 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982119 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982129 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982139 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982146 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.982996 4763 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.983379 4763 server.go:1280] "Started kubelet"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.984491 4763 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.984518 4763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.984577 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc systemd[1]: Started Kubernetes Kubelet.
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.985539 4763 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.986751 4763 server.go:460] "Adding debug handlers to kubelet server"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987459 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987747 4763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987791 4763 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987807 4763 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.987901 4763 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.987763 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:30:19.92375395 +0000 UTC
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.988060 4763 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.988940 4763 factory.go:55] Registering systemd factory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.988974 4763 factory.go:221] Registration of the systemd container factory successfully
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.991157 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Jan 31 14:54:40 crc kubenswrapper[4763]: W0131 14:54:40.989229 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 31 14:54:40 crc kubenswrapper[4763]: E0131 14:54:40.991384 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.991441 4763 factory.go:153] Registering CRI-O factory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.991464 4763 factory.go:221] Registration of the crio container factory successfully
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.993137 4763 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.993225 4763 factory.go:103] Registering Raw factory
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.993367 4763 manager.go:1196] Started watching for new ooms in manager
Jan 31 14:54:40 crc kubenswrapper[4763]: I0131 14:54:40.996735 4763 manager.go:319] Starting recovery of all containers
Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.002255 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fd88d892daaee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:54:40.983354094 +0000 UTC m=+0.738092387,LastTimestamp:2026-01-31 14:54:40.983354094 +0000 UTC m=+0.738092387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.011947 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012006 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012022 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012038 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012051 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012064 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012078 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012090 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012107 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012120 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012140 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012154 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012167 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012185 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012199 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012211 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012223 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012239 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012254 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012270 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012283 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012296 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012311 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012320 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012357 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012371 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012385 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012399 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012413 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012427 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012465 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012480 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012515 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012530 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012544 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012558 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012571 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012586 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012600 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.012613 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.014927 4763 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015034 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015176 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015383 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015512 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015620 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015851 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.015949 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016278 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016398 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016552 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016655 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016763 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016899 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.016996 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.018184 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.018448 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.018940 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.019029 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.019109 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.019185 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020195 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020297 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020382 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020474 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020555 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020639 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020748 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020832 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.020919 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021000 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021072 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021146 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021221 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021295 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021373 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021456 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021541 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021619 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021716 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021814 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021900 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.021983 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022065 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022155 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022234 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022318 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022397 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022481 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022564 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022648 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022757 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022836 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022912 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.022990 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023063 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023144 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023218 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023294 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023370 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023466 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023581 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023672 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023782 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023853 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023921 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.023981 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024053 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024115 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024177 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024234 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024295 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024355 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024412 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024485 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024561 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024640 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024815 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024908 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.024987 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025061 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025136 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025216 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025302 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025388 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025452 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025506 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025558 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025614 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025667 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025737 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025801 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025879 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.025988 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026073 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026144 4763 manager.go:324] Recovery completed
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026150 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026480 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026498 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026513 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026525 4763 reconstruct.go:130] "Volume is marked as uncertain and
added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026536 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026547 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026557 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026568 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026579 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026590 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026602 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026613 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026623 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026633 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026643 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026654 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026664 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026675 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026685 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026708 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026717 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026727 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026737 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026748 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026758 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026768 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026779 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026789 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026800 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026810 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026820 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026830 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026843 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026853 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026862 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026873 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026883 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026893 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026902 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026913 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026925 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026935 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026945 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026956 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026967 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026978 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026987 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.026999 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027008 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027018 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027028 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027039 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027049 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027059 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027070 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027081 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027091 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027101 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027112 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027130 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027141 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027152 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027162 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027171 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027182 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027193 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027203 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027213 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027224 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027235 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027244 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027254 4763 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027275 4763 reconstruct.go:97] "Volume reconstruction finished" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.027283 4763 reconciler.go:26] "Reconciler: start to sync state" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.038565 4763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040425 4763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040457 4763 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040482 4763 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.040522 4763 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.040623 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: W0131 14:54:41.044124 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.044204 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047165 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047972 4763 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.047987 4763 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.048006 4763 state_mem.go:36] "Initialized new in-memory state store" Jan 31 
14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.075245 4763 policy_none.go:49] "None policy: Start" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.076271 4763 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.076308 4763 state_mem.go:35] "Initializing new in-memory state store" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.088024 4763 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.124937 4763 manager.go:334] "Starting Device Plugin manager" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.124992 4763 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125008 4763 server.go:79] "Starting device plugin registration server" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125515 4763 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125531 4763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125823 4763 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125920 4763 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.125930 4763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.137607 4763 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.141363 4763 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.141476 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142523 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142566 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142788 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.142953 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143000 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143905 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.143936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144050 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144105 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144125 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144375 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144445 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.144479 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.145497 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.145536 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.145551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146144 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146326 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146449 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.146480 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147245 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147274 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147326 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147348 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147600 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147627 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147935 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.147971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148177 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148211 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148273 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148302 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.148320 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.149417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.149452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.149468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.192365 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.225951 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227006 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.227128 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.227786 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229307 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229345 4763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229376 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229404 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229434 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229465 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229497 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229573 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229617 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229668 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229732 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229774 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229826 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.229865 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331509 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331617 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331746 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331809 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331859 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331905 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331946 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331941 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331994 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332020 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332062 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332042 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331972 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332099 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.331993 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332109 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332678 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332167 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332846 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332875 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332944 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332894 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333002 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333142 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.332029 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333059 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333402 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333761 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.333883 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.427935 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429820 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.429856 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.430465 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.481034 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.498869 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.522865 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.529105 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.532654 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:54:41 crc kubenswrapper[4763]: W0131 14:54:41.545182 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944 WatchSource:0}: Error finding container f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944: Status 404 returned error can't find the container with id f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944 Jan 31 14:54:41 crc kubenswrapper[4763]: W0131 14:54:41.561036 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff WatchSource:0}: Error finding container 7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff: Status 404 returned error can't find the container with id 7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.593624 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.830942 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833132 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.833165 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: E0131 14:54:41.833708 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.985683 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:41 crc kubenswrapper[4763]: I0131 14:54:41.988660 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:16:43.21550503 +0000 UTC Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.050029 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56f4eabf9c31133b3b1167a4f704d076770dfdeb48377d30b1cbdecd3f09aea8"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.051242 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2d283ccf794b468b3f6c90bbb15e3f278f236a6cd071e253067dcb1b41561db7"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.053640 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7eca7a880f14cf128c6b544a65025bd1182729357b3a714fa5dcfe8cd769b5ff"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.054896 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3b7f1e78114cf80c259c356353ecdc487963868489acd3ee46e8e8a71cf3128c"} Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.055837 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2dc4ef8dd1936996f421598f158e81d6e2fd49b7c2848d644d4df2ca35ad944"} Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.068625 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.068745 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.095964 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.096111 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.097382 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.097458 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: W0131 14:54:42.362486 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.362590 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.394889 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.634432 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.637247 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:42 crc kubenswrapper[4763]: E0131 14:54:42.638048 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.986287 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:42 crc kubenswrapper[4763]: I0131 14:54:42.989420 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:24:26.794374122 +0000 UTC Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.041094 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:54:43 crc kubenswrapper[4763]: E0131 14:54:43.042393 4763 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.064828 4763 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.065049 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.065835 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.067583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.067629 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.067648 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.069265 4763 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.069332 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.069441 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.071429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.071481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.071503 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075402 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075479 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075501 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075520 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.075657 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.079806 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.079864 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.079892 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.081745 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.081864 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.081877 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.083681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.083773 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.083796 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.085105 4763 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc" exitCode=0 Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.085166 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc"} Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.085329 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.086719 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.086772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.086808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.087059 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.089527 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.089580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.089597 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.985541 4763 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:43 crc kubenswrapper[4763]: I0131 14:54:43.989714 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:38:07.886693214 +0000 UTC Jan 31 14:54:43 crc kubenswrapper[4763]: E0131 14:54:43.996655 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091048 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091148 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091164 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.091275 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.092228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.092256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.092268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095234 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095282 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.095293 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.099781 4763 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f" exitCode=0 Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.099863 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.100013 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.100941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.101033 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.101048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.105815 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.105844 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dd4c38511fece2af6df3bb93ecff7c793bbf4320c7b78e9996fa88a8775d2752"} Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.105815 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107099 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.107208 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc 
kubenswrapper[4763]: I0131 14:54:44.107221 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: W0131 14:54:44.143421 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.143502 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.238756 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.239984 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.240037 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.240050 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.240083 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.240579 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.369289 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:44 crc kubenswrapper[4763]: W0131 14:54:44.407580 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.407732 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:44 crc kubenswrapper[4763]: W0131 14:54:44.434085 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 31 14:54:44 crc kubenswrapper[4763]: E0131 14:54:44.434199 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:54:44 crc kubenswrapper[4763]: I0131 14:54:44.990917 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:28:59.670206889 +0000 UTC Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.110654 4763 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd" exitCode=0 Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.110756 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd"} Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.110844 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.112108 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.112162 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.112182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117159 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b"} Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117244 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117307 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117250 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117350 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.117419 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.118590 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.118636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.118650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119083 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119129 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:45 crc 
kubenswrapper[4763]: I0131 14:54:45.119147 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119482 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119740 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.119840 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.394244 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.598269 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.737266 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.745738 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:45 crc kubenswrapper[4763]: I0131 14:54:45.991600 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:35:29.996977029 +0000 UTC Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123858 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931"} Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123907 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123919 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e"} Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.123944 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90"} Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.124041 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.124175 4763 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125008 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.125288 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:46 crc kubenswrapper[4763]: I0131 14:54:46.992452 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:01:14.529493056 +0000 UTC Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132306 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6"} Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132389 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142"} Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132589 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132606 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.132739 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134297 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.134440 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.135361 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.135405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.135433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.393312 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.441409 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443063 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443076 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.443110 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.476294 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:54:47 crc kubenswrapper[4763]: I0131 14:54:47.992977 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:44:17.16274971 +0000 UTC Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.134923 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.135101 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136516 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.136738 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:48 crc kubenswrapper[4763]: I0131 14:54:48.993402 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:54:13.725015816 +0000 UTC Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.148274 4763 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.148511 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.149773 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.149804 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.149815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.941058 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.941312 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.942668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.942737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.942753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:49 crc kubenswrapper[4763]: I0131 14:54:49.994425 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:11:26.188769159 +0000 UTC Jan 31 14:54:50 crc kubenswrapper[4763]: I0131 14:54:50.994753 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:10:36.532020552 +0000 UTC Jan 31 14:54:51 crc kubenswrapper[4763]: E0131 14:54:51.138634 4763 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.833405 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.833760 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.835457 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.835533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.835555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:51 crc kubenswrapper[4763]: I0131 14:54:51.994911 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:33:51.370631304 +0000 UTC Jan 31 14:54:52 crc kubenswrapper[4763]: I0131 14:54:52.995422 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:30:27.725424238 +0000 UTC Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.078547 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.079515 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.081284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.081349 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.081370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.086850 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.148859 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.150229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.150296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.150316 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:54:53 crc kubenswrapper[4763]: I0131 14:54:53.996430 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:10:29.0394643 +0000 UTC Jan 31 14:54:54 crc kubenswrapper[4763]: W0131 14:54:54.805269 4763 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.805430 4763 trace.go:236] Trace[1269472654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:54:44.803) (total time: 10001ms): Jan 31 14:54:54 crc kubenswrapper[4763]: Trace[1269472654]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:54:54.805) Jan 31 14:54:54 crc kubenswrapper[4763]: Trace[1269472654]: [10.001634467s] [10.001634467s] END Jan 31 14:54:54 crc kubenswrapper[4763]: E0131 14:54:54.805468 4763 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.973208 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.973312 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.980183 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.980265 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 31 14:54:54 crc kubenswrapper[4763]: I0131 14:54:54.996902 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:15:36.086558246 +0000 UTC
Jan 31 14:54:55 crc kubenswrapper[4763]: I0131 14:54:55.997059 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:20:58.127190937 +0000 UTC
Jan 31 14:54:56 crc kubenswrapper[4763]: I0131 14:54:56.078677 4763 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 14:54:56 crc kubenswrapper[4763]: I0131 14:54:56.078829 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 14:54:56 crc kubenswrapper[4763]: I0131 14:54:56.997445 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 00:06:22.663350072 +0000 UTC
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.478217 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.478373 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479226 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479324 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479725 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479842 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.479912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.485202 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 31 14:54:57 crc kubenswrapper[4763]: I0131 14:54:57.998441 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:41:34.651640572 +0000 UTC
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.026255 4763 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.162759 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.163098 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.985810 4763 apiserver.go:52] "Watching apiserver"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.990640 4763 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991093 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991572 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991726 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.991758 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:54:58 crc kubenswrapper[4763]: E0131 14:54:58.992066 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:54:58 crc kubenswrapper[4763]: E0131 14:54:58.991967 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.992241 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.992590 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:54:58 crc kubenswrapper[4763]: E0131 14:54:58.992739 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.992360 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.994443 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.995039 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.996450 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997185 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997360 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997273 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997589 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997885 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.997942 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 31 14:54:58 crc kubenswrapper[4763]: I0131 14:54:58.998642 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:00:11.036378911 +0000 UTC
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.065500 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.081130 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.088566 4763 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.094932 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.105663 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.115356 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.125133 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.135778 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.145196 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.164915 4763 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.164966 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 31 14:54:59 crc kubenswrapper[4763]: E0131 14:54:59.955473 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.957787 4763 trace.go:236] Trace[561870413]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:54:48.441) (total time: 11516ms):
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[561870413]: ---"Objects listed" error: 11516ms (14:54:59.957)
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[561870413]: [11.516350693s] [11.516350693s] END
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.958065 4763 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.958543 4763 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:59 crc kubenswrapper[4763]: E0131 14:54:59.959541 4763 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.961412 4763 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.966241 4763 trace.go:236] Trace[996093763]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:54:49.607) (total time: 10358ms):
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[996093763]: ---"Objects listed" error: 10358ms (14:54:59.966)
Jan 31 14:54:59 crc kubenswrapper[4763]: Trace[996093763]: [10.358442512s] [10.358442512s] END
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.966269 4763 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.982414 4763 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 31 14:54:59 crc kubenswrapper[4763]: I0131 14:54:59.998899 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:46:39.324159494 +0000 UTC
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.008089 4763 csr.go:261] certificate signing request csr-vnnb4 is approved, waiting to be issued
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.014217 4763 csr.go:257] certificate signing request csr-vnnb4 is issued
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.062545 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.062862 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063026 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063174 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063277 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063112 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063331 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063538 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063655 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063773 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063792 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063860 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063885 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063903 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063918 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063936 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063980 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.063997 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064014 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064799 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064893 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.064937 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065190 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065246 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065327 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065376 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065405 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065426 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065447 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065474 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065497 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065516 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065567 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065592 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065594 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065616 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065636 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065681 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065719 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065742 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065750 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065765 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065790 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065812 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065833 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065855 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065856 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065876 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065929 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.065967 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066004 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066030 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066048 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066089 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066119 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066129 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066205 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066210 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066247 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066284 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066321 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066342 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066370 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066409 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066454 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066457 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066507 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066543 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066564 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066589 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066615 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066643 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066667 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066672 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066718 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066732 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066798 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066836 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066870 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066886 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066937 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.066975 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067016 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067057 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067063 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067086 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067108 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067146 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067191 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067233 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067274 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067297 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067327 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067452 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067487 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067534 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067550 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067573 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067607 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067651 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067676 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067718 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067755 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067791 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067886 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067900 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.067959 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068079 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068137 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068178 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068212 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068216 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068288 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068626 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068812 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068452 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068371 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068611 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.068874 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.070273 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071476 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.070965 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071686 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071766 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071681 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.071927 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.072158 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.072200 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.081895 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.081935 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082110 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.069079 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082274 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082429 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082571 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.082969 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083033 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.083065 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.583043943 +0000 UTC m=+20.337782236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083098 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083126 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083146 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083164 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083182 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083202 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083221 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083240 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083239 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083259 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083278 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083296 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083316 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083348 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083368 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083387 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083405 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 
14:55:00.083440 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083458 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083476 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083493 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083510 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083527 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083547 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083569 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083594 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083620 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 
14:55:00.083638 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083680 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083731 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083749 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083766 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083784 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083820 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083839 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083856 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc 
kubenswrapper[4763]: I0131 14:55:00.083872 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083888 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083907 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083928 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083952 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083970 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083986 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084003 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084020 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084037 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc 
kubenswrapper[4763]: I0131 14:55:00.084056 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084077 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084107 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084134 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084160 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084185 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084209 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084235 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084261 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084285 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084335 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084413 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084438 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084460 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084482 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084504 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084529 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084555 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084574 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084614 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084631 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084647 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084665 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084680 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085228 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085250 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085267 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085284 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085301 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085321 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085343 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085368 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085390 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085414 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085436 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085454 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085471 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085488 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085506 4763 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085525 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085562 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085579 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085595 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085611 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085631 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085650 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085668 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085686 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085724 4763 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085741 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085762 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085780 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085798 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085820 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085869 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085910 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085934 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085957 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 
14:55:00.085976 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085995 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086041 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086070 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086094 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086121 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086142 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086161 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086186 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086206 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086242 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086279 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086316 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086336 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086417 4763 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086429 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" 
DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086441 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086452 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086462 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086473 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086483 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086495 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086506 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086515 4763 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086524 4763 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086535 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086544 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086553 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086563 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on 
node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086572 4763 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086581 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086590 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086600 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086609 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086619 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086628 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086638 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086649 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086659 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086671 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086681 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086705 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086715 4763 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086727 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086736 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086744 4763 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086754 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086763 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086773 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086782 4763 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086792 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086800 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086812 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086826 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086838 4763 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086850 4763 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086862 4763 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086875 4763 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086884 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086895 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086905 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086914 4763 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086924 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086933 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086942 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086951 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.086960 4763 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094038 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094813 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.096802 4763 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083258 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083288 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083420 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083560 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109605 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083712 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.083830 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084046 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084235 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084256 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084416 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084508 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084595 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.084910 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085142 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085714 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.085903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.092131 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.091968 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.092538 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.093578 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.093954 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094099 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.094310 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094378 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.094405 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094570 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094651 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094783 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094912 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.094967 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.095170 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.095190 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.095518 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.096157 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.096343 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.097460 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.097945 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098116 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098171 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098680 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.098935 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.099225 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.099876 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100055 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100232 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100396 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.100555 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.106589 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107025 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107319 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107578 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107715 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107941 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.107940 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108178 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108391 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108732 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.108734 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108860 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.108952 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109354 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109678 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.110330 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.610307904 +0000 UTC m=+20.365046197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.110381 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.610371976 +0000 UTC m=+20.365110269 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.111571 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.111733 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.111839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.111888 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.111911 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112334 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112396 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112512 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112682 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112876 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.112895 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.112995 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.612967986 +0000 UTC m=+20.367706279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113254 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113461 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113574 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113504 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113846 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113923 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.113960 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114134 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114828 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114851 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114845 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.114170 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115178 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115377 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115530 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115810 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115833 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.115999 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116255 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116315 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116533 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116623 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.116833 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.117009 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.117234 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.117952 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118095 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118361 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118418 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.109210 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ghn8r"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.118759 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119144 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119170 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119197 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qcb97"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119317 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119361 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119548 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119731 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.119988 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120063 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120155 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120547 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120553 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.120969 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.121937 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.122312 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.122539 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.123522 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.123529 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.123620 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124065 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124177 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124283 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.129045 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124836 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.124871 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.125037 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.131006 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.131671 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.131717 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.131733 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.132798 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:00.632777746 +0000 UTC m=+20.387516039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.136175 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.136299 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.136563 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.137364 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.137598 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.138115 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.138979 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139095 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139158 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139335 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.139681 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.141339 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.141976 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.142355 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143095 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143280 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143838 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.143838 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.144470 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.144989 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.145399 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.146455 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.146684 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.146940 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.147776 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.148139 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.151327 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.154821 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.154946 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.156506 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.164297 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.167717 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218660 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218719 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/69a7de9b-f4a3-408b-8b12-570db6fcd84f-hosts-file\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218738 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpgh\" (UniqueName: \"kubernetes.io/projected/69a7de9b-f4a3-408b-8b12-570db6fcd84f-kube-api-access-gzpgh\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " 
pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218777 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/82ab6d11-5754-4903-ac36-bb0279dfa1fa-serviceca\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218795 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrxh\" (UniqueName: \"kubernetes.io/projected/82ab6d11-5754-4903-ac36-bb0279dfa1fa-kube-api-access-vlrxh\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218821 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218835 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82ab6d11-5754-4903-ac36-bb0279dfa1fa-host\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218893 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218906 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218915 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218924 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218932 4763 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218941 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218949 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218958 4763 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218966 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218974 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218982 4763 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218990 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.218999 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219007 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219018 4763 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219027 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219036 4763 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219045 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219053 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219062 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219070 4763 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219078 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219086 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219094 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219101 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219109 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219117 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219124 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219132 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219140 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219148 4763 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219155 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219163 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219172 4763 reconciler_common.go:293] "Volume detached 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219179 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219186 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219194 4763 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219202 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219210 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219217 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219225 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219232 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219240 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219248 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219256 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219264 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219273 4763 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219283 4763 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219291 4763 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219298 4763 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219306 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219314 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219322 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219330 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219338 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219346 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219354 4763 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219361 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219369 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219377 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219385 4763 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219393 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219401 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219409 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219417 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219424 4763 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219432 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219440 4763 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219448 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219456 4763 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219465 4763 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219473 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219481 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219489 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219496 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219504 4763 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219512 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219522 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219531 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219539 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219547 4763 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219555 4763 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219563 4763 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219573 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219580 4763 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219588 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219596 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219605 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219613 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219621 4763 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219629 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219637 4763 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219645 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219653 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219661 4763 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219669 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219676 4763 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219685 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219709 4763 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219718 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219726 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219734 4763 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219743 4763 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219750 4763 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219758 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219766 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219773 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219781 4763 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219789 4763 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219797 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219804 4763 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219813 4763 reconciler_common.go:293] "Volume detached for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219820 4763 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219828 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219837 4763 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219844 4763 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219852 4763 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219859 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219867 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219875 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219882 4763 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219890 4763 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219898 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219906 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219913 4763 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219921 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219929 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219937 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219945 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219953 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219960 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219968 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219976 4763 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.219994 4763 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220002 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220009 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220017 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220025 4763 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220033 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220041 4763 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220050 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220185 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.220236 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.221214 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.223167 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.224850 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.230670 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.236241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.248992 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.280756 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.300080 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
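The mount-side entries that follow are the counterpart of the detach run earlier: the volume manager continually diffs its desired state of world (volumes that pods assigned to the node require) against the actual state of world (what is attached and mounted), mounting what is missing (MountVolume started, then MountVolume.SetUp succeeded) and recording teardown for what is no longer wanted (Volume detached). A toy model of that diff — the volume names are samples from this log, not kubelet's real data structures:

```go
package main

import "fmt"

func main() {
	// Desired state of world: volumes required by pods assigned to the node.
	desired := map[string]bool{"serviceca": true, "hosts-file": true, "host": true}
	// Actual state of world: volumes currently mounted on the node.
	actual := map[string]bool{"hosts-file": true, "kube-api-access-zgdk5": true}

	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("Volume detached for volume %q DevicePath %q\n", v, "")
		}
	}
}
```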
connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320412 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrxh\" (UniqueName: \"kubernetes.io/projected/82ab6d11-5754-4903-ac36-bb0279dfa1fa-kube-api-access-vlrxh\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320470 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82ab6d11-5754-4903-ac36-bb0279dfa1fa-host\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320517 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/69a7de9b-f4a3-408b-8b12-570db6fcd84f-hosts-file\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320536 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzpgh\" (UniqueName: \"kubernetes.io/projected/69a7de9b-f4a3-408b-8b12-570db6fcd84f-kube-api-access-gzpgh\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320559 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/82ab6d11-5754-4903-ac36-bb0279dfa1fa-serviceca\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.320829 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82ab6d11-5754-4903-ac36-bb0279dfa1fa-host\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.321188 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/69a7de9b-f4a3-408b-8b12-570db6fcd84f-hosts-file\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.321710 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/82ab6d11-5754-4903-ac36-bb0279dfa1fa-serviceca\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.339985 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.340277 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrxh\" (UniqueName: \"kubernetes.io/projected/82ab6d11-5754-4903-ac36-bb0279dfa1fa-kube-api-access-vlrxh\") pod \"node-ca-ghn8r\" (UID: \"82ab6d11-5754-4903-ac36-bb0279dfa1fa\") " pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.341366 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzpgh\" (UniqueName: \"kubernetes.io/projected/69a7de9b-f4a3-408b-8b12-570db6fcd84f-kube-api-access-gzpgh\") pod \"node-resolver-qcb97\" (UID: \"69a7de9b-f4a3-408b-8b12-570db6fcd84f\") " pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.356008 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.372871 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.384126 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.390771 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.401510 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.423688 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.433741 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.454241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.455477 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ghn8r" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.457932 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-qcb97" Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.481726 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a7de9b_f4a3_408b_8b12_570db6fcd84f.slice/crio-da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3 WatchSource:0}: Error finding container da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3: Status 404 returned error can't find the container with id da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3 Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.487067 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ab6d11_5754_4903_ac36_bb0279dfa1fa.slice/crio-1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17 WatchSource:0}: Error finding container 1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17: Status 404 returned error can't find the container with id 1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17 Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.503040 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-npvkf"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.503806 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qzkhg"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.503931 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.504683 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9wp2x"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.504883 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.505821 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.508916 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509099 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509253 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509447 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509594 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509632 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509889 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.509931 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.510040 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.513654 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.514425 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.514632 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521589 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-multus\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521626 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-etc-kubernetes\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521647 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-netns\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521679 4763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-cnibin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521716 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-cni-binary-copy\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521738 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-k8s-cni-cncf-io\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521757 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-system-cni-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521899 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d1f3628-a7fe-4094-a313-96c0469fcf78-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521943 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tglt\" (UniqueName: \"kubernetes.io/projected/081252dc-3eaa-4608-8b06-16c377dff2e7-kube-api-access-4tglt\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.521978 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-socket-dir-parent\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522002 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-bin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522028 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d1f3628-a7fe-4094-a313-96c0469fcf78-rootfs\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " 
pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522051 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-cnibin\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522075 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-multus-daemon-config\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522098 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-os-release\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522121 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-kubelet\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522169 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522191 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4pm\" (UniqueName: \"kubernetes.io/projected/2335d04f-10b2-4cf8-aae6-236650539c74-kube-api-access-zh4pm\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522212 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-binary-copy\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522242 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-os-release\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522262 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-hostroot\") pod 
\"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522288 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522325 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-conf-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522365 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522388 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkx2t\" (UniqueName: \"kubernetes.io/projected/9d1f3628-a7fe-4094-a313-96c0469fcf78-kube-api-access-pkx2t\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522422 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d1f3628-a7fe-4094-a313-96c0469fcf78-proxy-tls\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522445 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-system-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.522470 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-multus-certs\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.523547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.540828 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.563214 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.574118 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.585111 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.598019 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.607771 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.621407 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623005 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623086 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623111 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-kubelet\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623131 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623149 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623163 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh4pm\" (UniqueName: \"kubernetes.io/projected/2335d04f-10b2-4cf8-aae6-236650539c74-kube-api-access-zh4pm\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623177 4763 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-binary-copy\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623193 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-os-release\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623207 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-hostroot\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-conf-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623291 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623307 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkx2t\" (UniqueName: \"kubernetes.io/projected/9d1f3628-a7fe-4094-a313-96c0469fcf78-kube-api-access-pkx2t\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623323 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d1f3628-a7fe-4094-a313-96c0469fcf78-proxy-tls\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623340 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-system-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623360 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-multus-certs\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623376 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-netns\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623391 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-multus\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623404 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-etc-kubernetes\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623420 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-cnibin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623433 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-cni-binary-copy\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623448 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-k8s-cni-cncf-io\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623450 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-hostroot\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623464 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-system-cni-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " 
pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623502 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-system-cni-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623516 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d1f3628-a7fe-4094-a313-96c0469fcf78-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623536 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tglt\" (UniqueName: \"kubernetes.io/projected/081252dc-3eaa-4608-8b06-16c377dff2e7-kube-api-access-4tglt\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623542 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-multus\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623559 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d1f3628-a7fe-4094-a313-96c0469fcf78-rootfs\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623568 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-etc-kubernetes\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623582 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-socket-dir-parent\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623599 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-bin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623604 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-cnibin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc 
kubenswrapper[4763]: I0131 14:55:00.623615 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-cnibin\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623633 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-multus-daemon-config\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623626 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-netns\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623655 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-system-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623649 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-os-release\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623847 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-os-release\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623924 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-os-release\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.623859 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-multus-certs\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.623997 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.62396762 +0000 UTC m=+21.378705913 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624068 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-cni-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624068 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624104 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624135 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.624126374 +0000 UTC m=+21.378864667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624152 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.624144575 +0000 UTC m=+21.378882868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624157 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-kubelet\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624189 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-cni-binary-copy\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624233 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-run-k8s-cni-cncf-io\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624381 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-conf-dir\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624479 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624580 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624601 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624614 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624623 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-host-var-lib-cni-bin\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624642 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624654 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/081252dc-3eaa-4608-8b06-16c377dff2e7-cni-binary-copy\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.624664 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.624644358 +0000 UTC m=+21.379382821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624670 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2335d04f-10b2-4cf8-aae6-236650539c74-multus-socket-dir-parent\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624707 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9d1f3628-a7fe-4094-a313-96c0469fcf78-rootfs\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.624713 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/081252dc-3eaa-4608-8b06-16c377dff2e7-cnibin\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.625189 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d1f3628-a7fe-4094-a313-96c0469fcf78-mcd-auth-proxy-config\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.625685 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2335d04f-10b2-4cf8-aae6-236650539c74-multus-daemon-config\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.628284 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d1f3628-a7fe-4094-a313-96c0469fcf78-proxy-tls\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.639989 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.641061 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tglt\" (UniqueName: \"kubernetes.io/projected/081252dc-3eaa-4608-8b06-16c377dff2e7-kube-api-access-4tglt\") pod \"multus-additional-cni-plugins-npvkf\" (UID: \"081252dc-3eaa-4608-8b06-16c377dff2e7\") " pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.641467 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkx2t\" (UniqueName: \"kubernetes.io/projected/9d1f3628-a7fe-4094-a313-96c0469fcf78-kube-api-access-pkx2t\") pod \"machine-config-daemon-9wp2x\" (UID: \"9d1f3628-a7fe-4094-a313-96c0469fcf78\") " pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.641670 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh4pm\" (UniqueName: 
\"kubernetes.io/projected/2335d04f-10b2-4cf8-aae6-236650539c74-kube-api-access-zh4pm\") pod \"multus-qzkhg\" (UID: \"2335d04f-10b2-4cf8-aae6-236650539c74\") " pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.650782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.674005 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.686069 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.695907 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.710584 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.720603 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.725345 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725620 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725657 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725675 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: E0131 14:55:00.725774 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:01.725746807 +0000 UTC m=+21.480485100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.727963 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.737660 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.747052 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.755029 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.765076 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.774569 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.785225 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.818665 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-npvkf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.822850 4763 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823138 4763 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823202 4763 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823245 4763 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823297 4763 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823451 4763 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823495 4763 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823578 4763 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823613 4763 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823645 4763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823671 4763 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items 
received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823720 4763 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823745 4763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823751 4763 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823771 4763 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823793 4763 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823812 4763 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823842 4763 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823935 4763 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823981 4763 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.823795 4763 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.824033 4763 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.824068 4763 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.835779 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod081252dc_3eaa_4608_8b06_16c377dff2e7.slice/crio-4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3 WatchSource:0}: Error finding container 4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3: Status 404 returned error can't find the container with id 4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3 Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.835890 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qzkhg" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.845327 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:55:00 crc kubenswrapper[4763]: W0131 14:55:00.850482 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2335d04f_10b2_4cf8_aae6_236650539c74.slice/crio-dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa WatchSource:0}: Error finding container dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa: Status 404 returned error can't find the container with id dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.868285 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.872502 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.875794 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.875954 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.876797 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877132 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877230 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877154 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.877153 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.887941 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.901032 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.912220 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.924525 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933077 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933553 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933589 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933747 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933835 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.933961 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934074 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934115 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934147 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934184 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934264 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934296 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934366 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934399 4763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934431 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934468 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934504 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.934534 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.936941 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.953007 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.961381 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.972484 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.982891 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.990323 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.998789 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:00 crc kubenswrapper[4763]: I0131 14:55:00.999854 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:12:50.380761955 +0000 UTC Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.013910 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.017785 4763 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 14:50:00 +0000 UTC, rotation deadline is 2026-12-10 12:48:20.161920154 +0000 UTC Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.017846 4763 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7509h53m19.144077308s for next certificate rotation Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.027412 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.034995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035027 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035042 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035057 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035079 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035095 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035124 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035139 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035195 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035210 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035225 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035239 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035233 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035272 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"ovnkube-node-dtknf\" (UID: 
\"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035295 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035331 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035334 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035367 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035398 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035402 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035501 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035658 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035740 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"ovnkube-node-dtknf\" (UID: 
\"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035773 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035808 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035843 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035874 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035906 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035943 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.035959 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036043 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036085 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036333 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036386 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.036979 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.037554 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.041072 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.041088 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.041077 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.041368 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.041536 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.041660 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.042805 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.047847 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.048522 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.050129 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.050914 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.052046 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.052616 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.054065 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.054750 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.060104 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.062387 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.064257 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"ovnkube-node-dtknf\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.068264 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.069089 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.070111 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.070681 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.071684 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.072348 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.072955 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.073812 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.074458 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.070624 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.075204 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.078012 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.078740 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.081845 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.083480 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.084022 4763 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.088442 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.089838 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.090442 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.091403 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.091967 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.095160 4763 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.095368 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.097067 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.098215 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.098718 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.100436 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.102585 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.103175 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.103961 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.104653 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.105491 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.106541 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.107209 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.111418 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.112233 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.113154 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.113757 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.114654 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.115076 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.115657 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.116664 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.117237 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.117766 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.118824 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.119481 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.120417 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.135459 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.167406 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.170995 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.171186 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.171249 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"93a82828283ccc16045851f8b34e6d568ee811e428f2d570bded649eae30abcd"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.175008 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ghn8r" event={"ID":"82ab6d11-5754-4903-ac36-bb0279dfa1fa","Type":"ContainerStarted","Data":"f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.175470 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ghn8r" event={"ID":"82ab6d11-5754-4903-ac36-bb0279dfa1fa","Type":"ContainerStarted","Data":"1873317d9abd52bf61e30e7a2e558923661088f717636c4e0502de37adebaa17"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.183169 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.183341 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qcb97" event={"ID":"69a7de9b-f4a3-408b-8b12-570db6fcd84f","Type":"ContainerStarted","Data":"874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.183391 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qcb97" event={"ID":"69a7de9b-f4a3-408b-8b12-570db6fcd84f","Type":"ContainerStarted","Data":"da140469ad6988e8d25c0e3674b0f930ac0ac71918a29cd12475c0d7eab891d3"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.186346 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.186395 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.186407 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d5eee5cbb2354072449a4933534633c6ba1ae66562f3fa6e91cdf8d8d36fd740"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.193363 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.193411 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"84f3fd9a473f804a51034f94da7ed19a66a7e9fc3fc0ae0637f9c763fcf7a771"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.194863 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.200310 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.200367 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"dc3cca4c513eb1e067e9109deb19ff574fc1649f8e97e78da007d65b15e333aa"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.203926 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.204531 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.204562 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"4226d3cba5c5a7097aab57c10efbec3cc6acd0eaf9923a086c6491bfae8420f3"} Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.206056 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0de505e43032dbbbc163fc9a504e070fa5e6f59ee0c408b0059d72965bbce8bd"} Jan 31 14:55:01 crc kubenswrapper[4763]: W0131 14:55:01.207228 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047ce610_09fa_482b_8d29_45ad376d12b3.slice/crio-be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3 WatchSource:0}: Error finding container be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3: Status 404 returned error can't find the container with id 
be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3 Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.249008 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\
\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.285734 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.333856 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.365036 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.408951 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.451028 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.486107 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.532414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.564050 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.604020 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.641977 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.642110 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.642142 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642176 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642145027 +0000 UTC m=+23.396883340 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.642225 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642250 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642304 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642289201 +0000 UTC m=+23.397027504 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642435 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642455 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642468 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642507 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642497676 +0000 UTC m=+23.397236039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642551 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.642579 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.642569088 +0000 UTC m=+23.397307471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.653907 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.656743 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.707937 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.737209 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.743216 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743343 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743358 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743370 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: E0131 14:55:01.743419 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:03.74340666 +0000 UTC m=+23.498144953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.763983 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.797151 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.826795 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.877634 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.893546 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-etcd/etcd-crc" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.906197 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.906919 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.917580 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.937081 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.971453 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 14:55:01 crc kubenswrapper[4763]: I0131 14:55:01.997766 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.000026 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:37:28.768700965 +0000 UTC Jan 31 14:55:02 crc 
kubenswrapper[4763]: I0131 14:55:02.017346 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.037715 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.067462 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.096785 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.125530 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.165623 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.177054 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.196656 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.203045 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.209837 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021" exitCode=0 Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.209905 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021"} Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.211748 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549" exitCode=0 Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.211822 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549"} Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.211873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3"} Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.237529 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:55:02 crc kubenswrapper[4763]: E0131 14:55:02.278381 4763 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.287854 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.297440 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.317030 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.358441 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.385258 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.397419 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.417225 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.457475 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.480468 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.507822 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.544514 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.589012 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.627847 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.666491 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.712575 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.750727 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"cont
ainerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.786098 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.828770 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.864649 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.914629 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.956857 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:02 crc kubenswrapper[4763]: I0131 14:55:02.988802 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc 
kubenswrapper[4763]: I0131 14:55:03.001080 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:29:44.216048185 +0000 UTC Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.025179 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.040879 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.040964 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.041040 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.041109 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.040966 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.041197 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.064276 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.082793 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.087924 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.105806 4763 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.124629 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.166560 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.207061 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.215436 4763 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.217471 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222734 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222767 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222777 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.222785 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8"} Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.250393 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.279159 4763 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.325561 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-
vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.357136 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.384347 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.428993 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.469423 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.507105 4763 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f6
65b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.551445 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.599370 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.629651 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.660803 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.660955 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.661007 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661036 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661003812 +0000 UTC m=+27.415742105 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.661087 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661142 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661213 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661191917 +0000 UTC m=+27.415930240 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661246 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661417 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661384992 +0000 UTC m=+27.416123325 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661273 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661477 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661504 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.661552 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.661538676 +0000 UTC m=+27.416277009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.663469 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.709019 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.761151 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.762069 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762340 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762397 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762421 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:03 crc kubenswrapper[4763]: E0131 14:55:03.762503 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:07.762475291 +0000 UTC m=+27.517213614 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.792467 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.833435 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.868286 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.911527 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.952378 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:03 crc kubenswrapper[4763]: I0131 14:55:03.993790 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.002139 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:03:09.108046711 +0000 UTC Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.030617 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.073822 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.119296 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.148785 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.229944 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138" exitCode=0 Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.230023 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138"} Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.235985 4763 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294"} Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.236028 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e"} Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.265843 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.293083 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.313253 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.332376 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.344138 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.387379 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.426647 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.462711 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.506460 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.551667 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.595123 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.627121 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.668814 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.706311 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:04 crc kubenswrapper[4763]: I0131 14:55:04.746558 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.003667 4763 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:15:59.249176626 +0000 UTC Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.041000 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.041151 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:05 crc kubenswrapper[4763]: E0131 14:55:05.041323 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.041931 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:05 crc kubenswrapper[4763]: E0131 14:55:05.042104 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:05 crc kubenswrapper[4763]: E0131 14:55:05.042207 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.243286 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226" exitCode=0 Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.243337 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226"} Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.278352 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/
etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.294502 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.309362 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.323689 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.336665 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.364268 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.382043 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.402581 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.417410 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.428323 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.448567 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.463826 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.479170 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.490116 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:05 crc kubenswrapper[4763]: I0131 14:55:05.505007 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.004443 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:16:07.452523393 +0000 UTC Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.253044 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.257491 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830" exitCode=0 Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.257550 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.284672 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.301588 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.315476 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.333387 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.360270 4763 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.365391 4763 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.370654 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.374403 4763 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.374613 4763 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375494 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375541 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375553 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.375581 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.392443 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.397653 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401393 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401413 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.401427 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.412837 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.413871 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417905 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.417918 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.426188 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.432426 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436320 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.436335 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.438760 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:
00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.451196 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.452412 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455402 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455470 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.455481 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.467753 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.469960 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: E0131 14:55:06.470178 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471796 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471832 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.471882 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.482332 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.495220 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.510193 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.521387 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574422 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.574463 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.678774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.678897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.678973 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.679013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.679039 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781420 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.781474 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.883977 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884021 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.884072 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986780 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986826 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986841 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:06 crc kubenswrapper[4763]: I0131 14:55:06.986856 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:06Z","lastTransitionTime":"2026-01-31T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.005133 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:26:33.7207247 +0000 UTC Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.041524 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.041609 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.041718 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.041754 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.041854 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.042019 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089813 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089863 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.089884 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192615 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.192647 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.266186 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.293571 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.294474 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.309939 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.323029 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.337443 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.347769 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.359664 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.377980 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.396976 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397017 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.397066 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.398350 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.408806 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.418270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.429028 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.441740 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.459788 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.477380 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.490556 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499516 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.499569 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602637 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602682 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.602768 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704254 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704449 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.704395413 +0000 UTC m=+35.459133716 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704576 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704616 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.704659 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704735 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704810 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704828 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704832 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.704809004 +0000 UTC m=+35.459547337 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704842 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704884 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.704873746 +0000 UTC m=+35.459612049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.704925 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.705061 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.70502729 +0000 UTC m=+35.459765633 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707507 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707543 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707562 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.707602 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.805418 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805751 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805815 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805842 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:07 crc kubenswrapper[4763]: E0131 14:55:07.805945 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.805913454 +0000 UTC m=+35.560651807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811709 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811721 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811739 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.811752 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913720 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913780 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913799 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:07 crc kubenswrapper[4763]: I0131 14:55:07.913815 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:07Z","lastTransitionTime":"2026-01-31T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.005774 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:37:46.451819706 +0000 UTC Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016349 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.016517 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119074 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119159 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119185 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.119201 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.221911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.221991 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.222016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.222052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.222077 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.276336 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.276866 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.284824 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073" exitCode=0 Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.284872 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.291038 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.305241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324829 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324891 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.324900 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.326495 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.342083 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.343654 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.361764 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.372988 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.384683 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.407428 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.420320 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.427626 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.435293 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.449594 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.460603 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.476143 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.494180 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.515221 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530522 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.530536 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.533351 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685
cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.544929 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.565506 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.582417 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.604872 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.629791 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633249 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc 
kubenswrapper[4763]: I0131 14:55:08.633266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.633277 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.644744 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.657662 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.668519 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.678965 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.692141 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.706997 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.717719 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.727578 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735080 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735106 4763 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.735115 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.740169 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838307 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.838339 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.940973 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:08 crc kubenswrapper[4763]: I0131 14:55:08.941087 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:08Z","lastTransitionTime":"2026-01-31T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.006752 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:08:53.710945004 +0000 UTC Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.041558 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.041561 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:09 crc kubenswrapper[4763]: E0131 14:55:09.041756 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:09 crc kubenswrapper[4763]: E0131 14:55:09.041873 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.041555 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:09 crc kubenswrapper[4763]: E0131 14:55:09.042149 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.043896 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.043948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.043979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.044003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.044021 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146673 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146837 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.146860 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249740 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249839 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.249889 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.295571 4763 generic.go:334] "Generic (PLEG): container finished" podID="081252dc-3eaa-4608-8b06-16c377dff2e7" containerID="95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5" exitCode=0 Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.295720 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerDied","Data":"95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.295868 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.296750 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.323161 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.345872 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354095 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354117 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.354134 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.359690 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.378270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.391202 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.406996 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.425153 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.441322 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.454499 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456020 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456243 4763 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.456255 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.473503 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.492392 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c
1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.515886 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.534742 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.550938 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559172 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559215 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559240 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.559995 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.569638 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.577871 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.590053 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd8
8956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.614992 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.626674 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.642507 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.655640 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662164 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.662226 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.664458 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.683871 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c
1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.700478 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.714217 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.727239 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.740420 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.750718 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.761931 4763 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764476 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.764485 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.776780 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd9
0d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.792349 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-
var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866426 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866461 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.866483 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969598 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969613 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:09 crc kubenswrapper[4763]: I0131 14:55:09.969653 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:09Z","lastTransitionTime":"2026-01-31T14:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.007887 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:15:40.281979072 +0000 UTC Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072106 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072122 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.072133 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177531 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177574 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.177595 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280488 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280578 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.280593 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.303531 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.304684 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" event={"ID":"081252dc-3eaa-4608-8b06-16c377dff2e7","Type":"ContainerStarted","Data":"2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.328364 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.342461 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.358553 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.383963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384026 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.384070 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.415893 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.428547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.442914 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.457017 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.468787 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.485269 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486181 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486210 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486224 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486244 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.486258 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.499774 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.513060 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.532753 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"na
me\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.551897 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921
636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.566030 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.577583 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.587960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.587997 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.588008 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.588027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.588039 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690885 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690896 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.690924 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794263 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794321 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794365 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.794382 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897914 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:10 crc kubenswrapper[4763]: I0131 14:55:10.897988 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:10Z","lastTransitionTime":"2026-01-31T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000206 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000251 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000262 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.000294 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.008825 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:11:18.518001331 +0000 UTC Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.041114 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.041175 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:11 crc kubenswrapper[4763]: E0131 14:55:11.041381 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.041404 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:11 crc kubenswrapper[4763]: E0131 14:55:11.041530 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:11 crc kubenswrapper[4763]: E0131 14:55:11.041652 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.065782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.077102 4763 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.085582 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103508 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.103578 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.107284 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.125328 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.142465 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.162514 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.177414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.189088 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.204382 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206495 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc 
kubenswrapper[4763]: I0131 14:55:11.206535 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.206562 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.217261 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.238222 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c
1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.271143 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.289024 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.310981 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312838 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.312873 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.315437 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/0.log" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.321485 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a" exitCode=1 Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.321558 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.322617 4763 scope.go:117] "RemoveContainer" containerID="5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.325286 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.358344 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.378138 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.396972 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.413415 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415581 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.415619 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.431243 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.468224 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 
handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.488625 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.512152 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.518805 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.518866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.518883 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.519519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.519612 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.536773 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.554355 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.572605 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.592500 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.610001 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621037 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621315 4763 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.621332 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.641477 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bi
nary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.723918 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.723962 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.723980 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.724001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.724017 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826362 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.826372 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.929928 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.929975 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.929994 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.930021 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:11 crc kubenswrapper[4763]: I0131 14:55:11.930043 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:11Z","lastTransitionTime":"2026-01-31T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.009757 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:45:32.168763744 +0000 UTC Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032594 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032637 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.032646 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135190 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135259 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.135299 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238015 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238109 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238141 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.238165 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.277456 4763 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.326928 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/0.log" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.329866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.330021 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340853 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340929 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340957 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.340982 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.346168 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.362088 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.379724 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.412536 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443095 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443154 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.443218 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.447009 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.466062 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.487815 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.508618 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.532270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546213 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546244 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546276 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.546300 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.556284 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.576658 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.599892 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.619471 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.635216 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649059 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649159 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.649176 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.657814 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752813 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc 
kubenswrapper[4763]: I0131 14:55:12.752831 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752857 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.752875 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855736 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.855762 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958748 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958838 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958857 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958883 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:12 crc kubenswrapper[4763]: I0131 14:55:12.958901 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:12Z","lastTransitionTime":"2026-01-31T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.010322 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:04:19.638290402 +0000 UTC Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.045277 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.045440 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.045968 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.046088 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.046160 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.046235 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.334383 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.334930 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/0.log" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.336978 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" exitCode=1 Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.337025 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.337079 4763 scope.go:117] "RemoveContainer" containerID="5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.337948 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:13 crc kubenswrapper[4763]: E0131 14:55:13.338116 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.545438 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.558178 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.573780 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.589055 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.611029 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.644289 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648719 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.648738 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.663747 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.681582 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.704560 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.723494 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751502 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751613 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.751633 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.756881 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e3
1746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.779436 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.798628 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.818554 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.838804 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.840895 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv"] Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.841448 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.845355 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.845799 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857210 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857235 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.857254 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.867869 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.888862 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3b
efb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.910286 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubele
t\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.928627 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.946004 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960557 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.960666 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:13Z","lastTransitionTime":"2026-01-31T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972460 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972516 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972572 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07280471-d907-4c1f-a38f-9337ecb04b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.972639 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzsm9\" (UniqueName: \"kubernetes.io/projected/07280471-d907-4c1f-a38f-9337ecb04b43-kube-api-access-pzsm9\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.973018 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:13 crc kubenswrapper[4763]: I0131 14:55:13.990789 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.009239 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.011270 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:37:59.092056569 +0000 UTC Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.028425 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.054174 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5718617532da891221042050317ce1d791697f5c1fea81c378506c5f53bf845a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:11Z\\\",\\\"message\\\":\\\"11.085355 6004 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.085548 6004 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:55:11.086307 6004 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:55:11.086336 6004 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:55:11.086359 6004 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:11.086367 6004 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:11.086392 6004 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:11.086437 6004 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:11.086444 6004 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:11.086485 6004 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:11.086536 6004 factory.go:656] Stopping watch factory\\\\nI0131 14:55:11.086560 6004 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:11.086576 6004 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:11.086588 6004 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:11.086600 6004 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed 
*v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064365 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064374 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.064398 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074017 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07280471-d907-4c1f-a38f-9337ecb04b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074117 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzsm9\" (UniqueName: \"kubernetes.io/projected/07280471-d907-4c1f-a38f-9337ecb04b43-kube-api-access-pzsm9\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.074216 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.075361 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.075764 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/07280471-d907-4c1f-a38f-9337ecb04b43-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.080975 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/07280471-d907-4c1f-a38f-9337ecb04b43-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.094396 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da
9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc4
91b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.097937 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzsm9\" (UniqueName: \"kubernetes.io/projected/07280471-d907-4c1f-a38f-9337ecb04b43-kube-api-access-pzsm9\") pod \"ovnkube-control-plane-749d76644c-8lmbv\" (UID: \"07280471-d907-4c1f-a38f-9337ecb04b43\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.121645 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.138084 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.156722 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167373 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.167681 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.169726 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.174281 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: W0131 14:55:14.190971 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07280471_d907_4c1f_a38f_9337ecb04b43.slice/crio-dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637 WatchSource:0}: Error finding container dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637: Status 404 returned error can't find the container with id dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637 Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.192478 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.209416 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270844 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.270856 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.341942 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" event={"ID":"07280471-d907-4c1f-a38f-9337ecb04b43","Type":"ContainerStarted","Data":"dc781ab56e38f793d59ad90cfa5d42a04a314aa09889fcbf1ee9e5dda77b2637"}
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.343226 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.346540 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518"
Jan 31 14:55:14 crc kubenswrapper[4763]: E0131 14:55:14.346713 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.358759 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.368270 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\
\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.372927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373142 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373225 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.373286 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.392568 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e3
1746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.451246 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7
a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.473507 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.475938 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.475981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.475993 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.476011 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.476026 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.486910 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.500791 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.518002 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8
d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.538138 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.552241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.571643 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578855 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578883 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.578896 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.585581 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.605189 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.619328 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.635625 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.649423 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc 
kubenswrapper[4763]: I0131 14:55:14.681280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.681315 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.783963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784045 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.784080 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.886979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.887839 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.951913 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-26pm5"] Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.952647 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:14 crc kubenswrapper[4763]: E0131 14:55:14.952783 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.975156 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.988156 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989799 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.989826 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:14Z","lastTransitionTime":"2026-01-31T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:14 crc kubenswrapper[4763]: I0131 14:55:14.998782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.008815 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.011412 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:37:34.044049437 +0000 UTC Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.020919 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.030898 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.041850 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.041850 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.041989 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.042041 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.041860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.042113 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.053031 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1
e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.064198 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.074550 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.083950 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.084014 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlvp4\" (UniqueName: \"kubernetes.io/projected/84302428-88e1-47ba-84cc-7d12472f9aa2-kube-api-access-mlvp4\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.085076 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091620 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091631 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.091653 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.097360 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.121461 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.133872 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.148009 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.159901 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.172814 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.184585 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlvp4\" (UniqueName: \"kubernetes.io/projected/84302428-88e1-47ba-84cc-7d12472f9aa2-kube-api-access-mlvp4\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.184633 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.184788 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.184837 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:15.684821036 +0000 UTC m=+35.439559329 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.185294 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.194983 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.195000 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.207891 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlvp4\" (UniqueName: \"kubernetes.io/projected/84302428-88e1-47ba-84cc-7d12472f9aa2-kube-api-access-mlvp4\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297712 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297784 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.297796 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.351128 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" event={"ID":"07280471-d907-4c1f-a38f-9337ecb04b43","Type":"ContainerStarted","Data":"2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.351180 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" event={"ID":"07280471-d907-4c1f-a38f-9337ecb04b43","Type":"ContainerStarted","Data":"b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.379116 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402218 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.402291 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.407597 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.427918 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.445676 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.459625 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.493796 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.510770 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.510971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511207 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.511220 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.527229 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.541966 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.555601 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.569502 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.585333 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.602537 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614633 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614667 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614676 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614716 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.614730 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.622123 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.641241 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.665885 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.683880 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:15Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.690295 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.690424 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.690478 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:16.690463347 +0000 UTC m=+36.445201650 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717505 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717596 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.717613 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.791437 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.791608 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.791575527 +0000 UTC m=+51.546313860 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.791802 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792023 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792083 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792108 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792124 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.792031 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792180 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.792157783 +0000 UTC m=+51.546896126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792231 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-31 14:55:31.792214344 +0000 UTC m=+51.546952777 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.792271 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792405 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.792477 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.79245309 +0000 UTC m=+51.547191423 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820746 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820901 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.820944 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.893230 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893535 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893604 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893628 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:15 crc kubenswrapper[4763]: E0131 14:55:15.893761 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:31.893735154 +0000 UTC m=+51.648473477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.923966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924056 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924557 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:15 crc kubenswrapper[4763]: I0131 14:55:15.924620 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:15Z","lastTransitionTime":"2026-01-31T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.011891 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:40:24.276583605 +0000 UTC Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027447 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.027565 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.130291 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.232989 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233041 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233052 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.233082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336409 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336457 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.336500 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439418 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.439592 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543292 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.543312 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646006 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646145 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.646186 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.703290 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.703461 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.703577 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:18.703554688 +0000 UTC m=+38.458292991 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733658 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733721 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733768 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733791 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.733801 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.753849 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758888 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758905 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.758950 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.779163 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784235 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.784342 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.804194 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.809912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.809975 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.810045 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.810068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.810085 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.833274 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838356 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838378 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.838395 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.854588 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:16 crc kubenswrapper[4763]: E0131 14:55:16.854826 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.857345 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.857460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.857480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.858068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.858131 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.961895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962014 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962039 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962070 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:16 crc kubenswrapper[4763]: I0131 14:55:16.962097 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:16Z","lastTransitionTime":"2026-01-31T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.013022 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:35:54.43267019 +0000 UTC Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.041778 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.041917 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.042000 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.042109 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042150 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042283 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042453 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:17 crc kubenswrapper[4763]: E0131 14:55:17.042529 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065600 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065659 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.065731 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169884 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169946 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.169991 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.170010 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274297 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274557 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274640 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.274665 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378362 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.378403 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.481904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482120 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.482138 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585413 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.585481 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.688862 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792509 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.792570 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895277 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895322 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.895339 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.997982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998034 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:17 crc kubenswrapper[4763]: I0131 14:55:17.998088 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:17Z","lastTransitionTime":"2026-01-31T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.013435 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 01:41:46.202879749 +0000 UTC Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100930 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.100949 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204285 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204330 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.204347 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307299 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.307416 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411326 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411379 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.411456 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514192 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514312 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.514405 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618274 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618299 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.618323 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722043 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722106 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722125 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.722171 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.725886 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:18 crc kubenswrapper[4763]: E0131 14:55:18.726080 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:18 crc kubenswrapper[4763]: E0131 14:55:18.726183 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:22.726157182 +0000 UTC m=+42.480895515 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824943 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.824961 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928962 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:18 crc kubenswrapper[4763]: I0131 14:55:18.928992 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:18Z","lastTransitionTime":"2026-01-31T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.013784 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:12:21.830993003 +0000 UTC Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.031181 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.041725 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.041729 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.041890 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.041990 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.042096 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.042239 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.042332 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:19 crc kubenswrapper[4763]: E0131 14:55:19.042393 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.133843 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.133933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.133950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.134009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.134028 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.237599 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340848 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340888 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.340904 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444401 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444476 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444500 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.444520 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547589 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.547667 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651047 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651115 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.651177 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.754901 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.754975 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.754998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.755024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.755041 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857723 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.857749 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.960986 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961038 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961077 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:19 crc kubenswrapper[4763]: I0131 14:55:19.961093 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:19Z","lastTransitionTime":"2026-01-31T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.014647 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:24:05.754594385 +0000 UTC Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.064669 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167889 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.167927 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270840 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270906 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.270945 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373122 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373189 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.373264 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476446 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476497 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476513 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.476553 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579487 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.579542 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683245 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683293 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.683313 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786358 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786420 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.786480 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.889959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890105 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.890130 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993098 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993144 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:20 crc kubenswrapper[4763]: I0131 14:55:20.993164 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:20Z","lastTransitionTime":"2026-01-31T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.015865 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:16:01.214346685 +0000 UTC Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041430 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041528 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041468 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.041475 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041722 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041808 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041901 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:21 crc kubenswrapper[4763]: E0131 14:55:21.041958 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.059256 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.079819 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096347 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096375 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.096471 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.113554 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.129574 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.147757 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.164767 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.180185 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc 
kubenswrapper[4763]: I0131 14:55:21.194987 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.198965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199041 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.199085 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.220415 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.235596 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.260576 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.287793 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302332 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302451 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.302495 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.304056 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.318433 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.333156 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.347577 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:21Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404809 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404921 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.404993 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.405010 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.509917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510025 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.510111 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613277 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613353 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.613414 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716572 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716651 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716676 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.716718 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.819919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.819990 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.820012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.820042 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.820089 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922560 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:21 crc kubenswrapper[4763]: I0131 14:55:21.922641 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:21Z","lastTransitionTime":"2026-01-31T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.016319 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:58:56.998646201 +0000 UTC Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025293 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.025314 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128622 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128640 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128664 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.128681 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231383 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.231499 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334154 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334251 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.334303 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437194 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.437236 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.540444 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643170 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.643301 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746828 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.746886 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.769500 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:22 crc kubenswrapper[4763]: E0131 14:55:22.769768 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:22 crc kubenswrapper[4763]: E0131 14:55:22.769852 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:30.769830892 +0000 UTC m=+50.524569215 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849722 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.849785 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953859 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.953981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:22 crc kubenswrapper[4763]: I0131 14:55:22.954000 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:22Z","lastTransitionTime":"2026-01-31T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.016854 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:20:52.643231486 +0000 UTC Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041423 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041542 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.041585 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041603 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.041742 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.042136 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.042278 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:23 crc kubenswrapper[4763]: E0131 14:55:23.042418 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.057508 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160783 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.160827 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263747 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263837 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.263887 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367200 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367286 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.367329 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470778 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470787 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.470810 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573309 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.573449 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676283 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676354 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.676394 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779592 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.779651 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.883680 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987129 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987149 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:23 crc kubenswrapper[4763]: I0131 14:55:23.987191 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:23Z","lastTransitionTime":"2026-01-31T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.017272 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:27:10.899969901 +0000 UTC Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090029 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.090275 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194906 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.194923 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299107 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299177 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299194 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.299237 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401597 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401662 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.401674 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.504985 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.505003 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607748 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607795 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607824 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.607836 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712508 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712525 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712550 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.712568 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816412 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.816469 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919930 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919956 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:24 crc kubenswrapper[4763]: I0131 14:55:24.919975 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:24Z","lastTransitionTime":"2026-01-31T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.018209 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:24:39.015603697 +0000 UTC Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023106 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.023154 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.041790 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.041823 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.041852 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042057 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.042117 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042248 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042369 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:25 crc kubenswrapper[4763]: E0131 14:55:25.042449 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127286 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.127332 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231688 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231728 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231758 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.231779 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335053 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335149 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335177 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.335194 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438622 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438732 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.438844 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541736 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541825 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.541847 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644588 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644739 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.644765 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748358 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748375 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.748418 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851171 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851306 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.851329 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954509 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:25 crc kubenswrapper[4763]: I0131 14:55:25.954573 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:25Z","lastTransitionTime":"2026-01-31T14:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.018825 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:03:14.739357329 +0000 UTC Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.042318 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057275 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057409 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.057431 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160618 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160653 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.160669 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264446 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.264459 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367718 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367743 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.367759 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.396027 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.400281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.400500 4763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.424942 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51
d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.444185 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.463271 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470549 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.470570 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.488986 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.509153 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.531984 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.553603 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.568758 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc 
kubenswrapper[4763]: I0131 14:55:26.573347 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.573432 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.582419 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.603730 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.614372 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.624683 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.641051 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\
":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod 
event handler 6\\\\nF0131 14:55:12.247078 6167 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.657470 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.694295 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696082 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.696115 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.714144 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.725358 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798893 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798946 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.798998 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901407 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901431 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.901481 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:26Z","lastTransitionTime":"2026-01-31T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:26 crc kubenswrapper[4763]: I0131 14:55:26.989026 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004340 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004358 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004382 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.004401 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006145 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006186 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.006246 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.019812 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:59:18.069463573 +0000 UTC Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.031811 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036442 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036521 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036545 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.036562 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041529 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041595 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041644 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.041683 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.041756 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.041788 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.041920 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.042145 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.055481 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.061364 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.073323 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076623 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076659 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076673 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.076750 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.088201 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092556 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092578 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.092614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.104641 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.104770 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.106260 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209116 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.209196 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312661 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312685 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.312772 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.407314 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.408480 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/1.log" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.415448 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.415846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.416086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.416298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.416511 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.417403 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" exitCode=1 Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.417452 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.417489 4763 scope.go:117] "RemoveContainer" containerID="307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.418567 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:27 crc kubenswrapper[4763]: E0131 14:55:27.418835 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.446164 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.466202 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.482594 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc 
kubenswrapper[4763]: I0131 14:55:27.497547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519614 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519622 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.519643 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.520665 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.535090 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.567450 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866b
e30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.585787 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.604108 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.621051 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623166 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.623178 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.639264 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.672315 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307c6f3d2186ef67dd7dec1e1c0e6771c079c4e31746a8e4172034a8b2177518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:12Z\\\",\\\"message\\\":\\\"ved *v1.Node event handler 2\\\\nI0131 14:55:12.246342 6167 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:12.246366 6167 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:12.246614 6167 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246653 6167 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:55:12.246751 6167 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:12.246781 6167 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:12.246844 6167 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:55:12.246855 6167 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0131 14:55:12.246869 6167 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0131 14:55:12.246911 6167 factory.go:656] Stopping watch factory\\\\nI0131 14:55:12.246936 6167 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:12.246949 6167 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:12.246966 6167 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:12.246968 6167 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0131 14:55:12.246988 6167 handler.go:208] Removed *v1.Pod event handler 6\\\\nF0131 14:55:12.247078 6167 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.693204 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.712302 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726243 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.726307 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.732950 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.751896 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.767972 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:27Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.829414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.829852 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.830022 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.830205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.830384 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.933641 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934059 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934222 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:27 crc kubenswrapper[4763]: I0131 14:55:27.934514 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:27Z","lastTransitionTime":"2026-01-31T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.020606 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:21:19.402261649 +0000 UTC Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038308 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038411 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.038428 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141111 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141448 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141623 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.141903 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.142097 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244850 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244956 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.244975 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347396 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347404 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.347426 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.422428 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.426385 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:28 crc kubenswrapper[4763]: E0131 14:55:28.426708 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.440977 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 
2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449923 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449957 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.449988 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.456782 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db77
08c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.466730 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154e
dc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.474715 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.483534 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.505366 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e32
4ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.537393 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7
a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.552533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.552820 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc 
kubenswrapper[4763]: I0131 14:55:28.552958 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.553105 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.553261 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.559851 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.575069 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.587624 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.603753 4763 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8
d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.621900 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.638005 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.655237 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656555 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.656623 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.670738 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.684259 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.697288 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:28 crc 
kubenswrapper[4763]: I0131 14:55:28.759103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759160 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759178 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.759219 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862478 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.862585 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965285 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965413 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:28 crc kubenswrapper[4763]: I0131 14:55:28.965430 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:28Z","lastTransitionTime":"2026-01-31T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.022509 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:36:10.071461327 +0000 UTC
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041032 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041227 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041127 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.041078 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.041733 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.041927 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.042138 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:55:29 crc kubenswrapper[4763]: E0131 14:55:29.042304 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
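
Every "Failed to update status for pod" entry above has the same proximate cause, spelled out at the end of each message: before admitting the status patch, the API server must call the validating webhook pod.network-node-identity.openshift.io at https://127.0.0.1:9743/pod, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-31. The x509 failure is a plain validity-window comparison (NotBefore <= now <= NotAfter). Below is a minimal Go sketch of that inspection; it is not part of any tooling in this log, the address is simply copied from the webhook URL in the messages, and InsecureSkipVerify is deliberate because the goal is to read an expired certificate, not to trust it:

    // check_cert.go: report the validity window of the certificate served
    // at an address, i.e. the same window check that fails in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint taken from the log messages
        // Skip verification so the handshake succeeds even with an expired cert;
        // we only want to inspect what the server presents.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
                cert.Subject,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339),
                now.After(cert.NotAfter))
        }
    }

The kubelet-serving line just above shows the same skew from the other side: the rotation deadline it reports (2025-11-25) is already in the past relative to the node clock, which is consistent with a CRC VM resumed long after its certificates were issued.
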
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068225 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068281 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068379 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.068404 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171175 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.171237 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.274503 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377788 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377821 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.377846 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480721 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480734 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.480764 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
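
The reason never changes across these cycles: the runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet. In this OVN-Kubernetes setup that file is presumably supplied by ovnkube-node once it is fully up, and its own status reports are being rejected by the expired webhook above, so the node sits in this loop. A quick way to see what the runtime is waiting for is to look for config files in the directory named in the message; the sketch below checks the conventional CNI extensions, a simplification of what the real libcni loader does:

    // cni_check.go: is there any CNI network config where the runtime looks?
    // The directory comes straight from the NetworkReady message in the log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // from the log message
        var found []string
        // .conf, .conflist and .json are the extensions CNI conventionally loads.
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join(confDir, pat))
            found = append(found, m...)
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration files; NetworkReady stays False")
            os.Exit(1)
        }
        for _, f := range found {
            fmt.Println("found:", f)
        }
    }

As long as that directory stays empty, the Ready condition remains False exactly as recorded above.
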
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584501 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584523 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.584540 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687598 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.687614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790438 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.790511 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893522 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893553 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.893565 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.948066 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.963916 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.971068 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:29Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.986881 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997837 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:29 crc kubenswrapper[4763]: I0131 14:55:29.997923 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:29Z","lastTransitionTime":"2026-01-31T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.010683 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.023646 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:02:07.918497025 +0000 UTC Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.030467 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.042464 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.058124 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.086660 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 
6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100889 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100914 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.100934 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.120936 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.142078 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.159030 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.173797 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.191013 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203422 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203467 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203503 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.203597 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.219976 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.237662 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.257311 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.273378 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.306660 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307517 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.307566 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411162 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411180 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.411221 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514187 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.514231 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616945 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.616967 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720160 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.720176 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823191 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.823230 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.867443 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:30 crc kubenswrapper[4763]: E0131 14:55:30.867631 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:30 crc kubenswrapper[4763]: E0131 14:55:30.867725 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:55:46.867676212 +0000 UTC m=+66.622414535 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926367 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:30 crc kubenswrapper[4763]: I0131 14:55:30.926429 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:30Z","lastTransitionTime":"2026-01-31T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.024483 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:51:33.24137605 +0000 UTC Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030221 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030277 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.030347 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041391 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041455 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041517 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041576 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041682 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.041761 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041881 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.041980 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.062579 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.083078 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.105091 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.128900 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133732 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.133791 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.152038 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.171590 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.186410 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.203124 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc 
kubenswrapper[4763]: I0131 14:55:31.220378 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236682 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.236746 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.241017 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.259743 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.273882 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.287855 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.319141 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.332362 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.339876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.339990 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.340023 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.340061 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.340100 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.361033 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030
a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.376268 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.396588 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:31Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442655 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442723 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.442764 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546396 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546507 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.546525 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649123 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649658 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.649953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.650148 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754283 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754528 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.754562 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857570 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857628 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.857690 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876169 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876308 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876328 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876297932 +0000 UTC m=+83.631036265 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876414 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.876455 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876505 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876549 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876568 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876596 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876654 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876629352 +0000 UTC m=+83.631367675 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876653 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876685 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876672463 +0000 UTC m=+83.631410796 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.876842 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.876811947 +0000 UTC m=+83.631550280 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961152 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961170 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.961242 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:31Z","lastTransitionTime":"2026-01-31T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:31 crc kubenswrapper[4763]: I0131 14:55:31.978033 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978228 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978270 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978290 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:31 crc kubenswrapper[4763]: E0131 14:55:31.978372 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:03.978343707 +0000 UTC m=+83.733082030 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.025624 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:17:24.84727606 +0000 UTC Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.064924 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.064978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.064994 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.065016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.065032 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167732 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167791 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.167812 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270591 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.270631 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.373897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.374728 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478015 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.478096 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581354 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.581415 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.684854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.685084 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787593 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.787614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890344 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.890406 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993787 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:32 crc kubenswrapper[4763]: I0131 14:55:32.993826 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:32Z","lastTransitionTime":"2026-01-31T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.026352 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:23:34.486782094 +0000 UTC Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040772 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.041659 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040957 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.040910 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.042262 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.042403 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:33 crc kubenswrapper[4763]: E0131 14:55:33.042532 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.096305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.096364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.096648 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.097004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.097070 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201857 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.201934 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305648 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305667 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305692 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.305738 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408893 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408939 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.408957 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.512984 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616477 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616531 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616569 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.616586 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720259 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720278 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.720318 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823262 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823278 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.823318 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926757 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926775 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:33 crc kubenswrapper[4763]: I0131 14:55:33.926818 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:33Z","lastTransitionTime":"2026-01-31T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.027315 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:07:08.655272256 +0000 UTC Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029459 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.029599 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133575 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133662 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.133838 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236893 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236910 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236955 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.236973 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340343 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340479 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.340526 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443498 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443611 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.443629 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546494 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.546506 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649680 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649733 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649755 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.649765 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.752867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.752968 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.752997 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.753028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.753048 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856515 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.856531 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959906 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959925 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:34 crc kubenswrapper[4763]: I0131 14:55:34.959935 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:34Z","lastTransitionTime":"2026-01-31T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.028159 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:13:26.981004343 +0000 UTC Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.041629 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.041733 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.041874 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.041903 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.042028 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.042031 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.042132 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:35 crc kubenswrapper[4763]: E0131 14:55:35.042273 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062724 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062792 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062814 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.062831 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165488 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165508 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.165515 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.273889 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.273961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.273980 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.274004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.274031 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.377295 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480302 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480378 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480402 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.480420 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583573 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.583614 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686932 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.686953 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790215 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.790289 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.892941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893056 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.893076 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.995971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:35 crc kubenswrapper[4763]: I0131 14:55:35.996110 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:35Z","lastTransitionTime":"2026-01-31T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.028793 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:43:23.15623109 +0000 UTC Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099064 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.099183 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202186 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.202248 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304828 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304900 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304923 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304951 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.304971 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409831 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409859 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.409879 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.514967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515025 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515042 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515066 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.515083 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618633 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618651 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618675 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.618743 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722324 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722396 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.722441 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825728 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825819 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825842 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825871 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.825891 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928731 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928781 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928798 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:36 crc kubenswrapper[4763]: I0131 14:55:36.928868 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:36Z","lastTransitionTime":"2026-01-31T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.029400 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:28:01.027676508 +0000 UTC Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031298 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031314 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031338 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.031355 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.041969 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042117 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.042337 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.042414 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042541 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.042608 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042746 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.042863 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.134360 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220133 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220209 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220233 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.220250 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.239084 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243908 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.243967 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.270958 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276478 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.276538 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.297731 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.302917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.302966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.302982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.303004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.303021 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.320956 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.325661 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.342671 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:37Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:37 crc kubenswrapper[4763]: E0131 14:55:37.342931 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345639 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345690 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345714 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.345743 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448718 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448738 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.448783 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551275 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.551319 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.654870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.654995 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.655018 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.655049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.655069 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.757927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.757992 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.758011 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.758034 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.758051 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.860921 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.860988 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.861011 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.861040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.861085 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963495 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963823 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963888 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:37 crc kubenswrapper[4763]: I0131 14:55:37.963935 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:37Z","lastTransitionTime":"2026-01-31T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.030376 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:07:46.975664357 +0000 UTC Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066372 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066470 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.066529 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169237 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169300 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169317 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.169354 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271431 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271479 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.271489 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373613 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373653 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.373670 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.476853 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.476917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.477004 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.477040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.477063 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580600 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580615 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.580651 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684290 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.684307 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.787467 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890100 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890115 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.890125 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994804 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.994916 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:38 crc kubenswrapper[4763]: I0131 14:55:38.995410 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:38Z","lastTransitionTime":"2026-01-31T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.031407 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:44:04.662399573 +0000 UTC Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041124 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041263 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041362 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041437 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041588 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.041663 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041812 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:39 crc kubenswrapper[4763]: E0131 14:55:39.041984 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098186 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098229 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098239 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.098263 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.201566 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304541 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304590 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.304617 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.406959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.407073 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510183 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510292 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510320 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.510929 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614642 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614686 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614740 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.614779 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717502 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717524 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.717544 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820361 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.820370 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923526 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:39 crc kubenswrapper[4763]: I0131 14:55:39.923600 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:39Z","lastTransitionTime":"2026-01-31T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026573 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.026721 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.032054 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:01:21.922183479 +0000 UTC Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131336 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.131394 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234422 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.234445 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336949 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.336995 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.337014 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439879 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439903 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.439919 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542334 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542415 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542444 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.542501 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.646604 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749829 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749852 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.749867 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.852961 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853053 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853082 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.853102 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956604 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956690 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956743 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:40 crc kubenswrapper[4763]: I0131 14:55:40.956760 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:40Z","lastTransitionTime":"2026-01-31T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.032759 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:03:37.898908558 +0000 UTC Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.041546 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.041600 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.041766 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.041751 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.041852 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.042008 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.042058 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:41 crc kubenswrapper[4763]: E0131 14:55:41.042153 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059480 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059535 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059583 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.059604 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.077908 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.095168 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.115481 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.130514 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.146670 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161907 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161949 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161969 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.161979 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.169969 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e32
4ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.188594 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.205712 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.221583 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.233882 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.246392 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.260151 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264859 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264917 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.264930 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.274413 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.287828 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.302813 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.315139 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.329825 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.343967 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:41Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367228 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367261 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.367273 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469858 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469885 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.469903 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573323 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573341 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573366 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.573384 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677420 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677475 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.677498 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780035 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780139 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.780156 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883840 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883869 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.883891 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.986339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.986845 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.986948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.987022 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:41 crc kubenswrapper[4763]: I0131 14:55:41.987092 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:41Z","lastTransitionTime":"2026-01-31T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.033311 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 21:56:28.829325571 +0000 UTC Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090265 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090306 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090322 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.090332 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.193185 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.193551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.193795 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.194006 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.194194 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297344 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297447 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.297464 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399856 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399873 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.399886 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502842 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502923 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.502967 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606506 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606528 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.606585 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709143 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.709176 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812056 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812110 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812129 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812151 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.812168 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914835 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914861 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914886 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:42 crc kubenswrapper[4763]: I0131 14:55:42.914904 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:42Z","lastTransitionTime":"2026-01-31T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.017978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018079 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018108 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.018128 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.033789 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:27:36.902172942 +0000 UTC Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041300 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041381 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041773 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.041798 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042084 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042129 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042231 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.042325 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.042990 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:43 crc kubenswrapper[4763]: E0131 14:55:43.043324 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121537 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121588 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.121648 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225539 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225604 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225627 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.225678 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328345 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328434 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.328482 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431798 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431869 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.431930 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534727 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534868 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534901 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534932 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.534955 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639146 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.639192 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742409 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742472 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742532 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.742589 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846335 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846400 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846421 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.846439 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950130 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950200 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950224 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:43 crc kubenswrapper[4763]: I0131 14:55:43.950240 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:43Z","lastTransitionTime":"2026-01-31T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.034801 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:47:55.8535442 +0000 UTC Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053839 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.053855 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157290 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157933 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.157993 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260424 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.260459 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362931 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.362988 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465672 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465726 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.465760 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568482 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568517 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568530 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.568559 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.671965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672005 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.672039 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775017 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775090 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775113 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775142 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.775163 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878354 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878444 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878472 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.878493 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981147 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981191 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981209 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:44 crc kubenswrapper[4763]: I0131 14:55:44.981241 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:44Z","lastTransitionTime":"2026-01-31T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.035210 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:32:22.933483628 +0000 UTC Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041673 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041813 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041930 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.041937 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.041994 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.042062 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.042175 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:45 crc kubenswrapper[4763]: E0131 14:55:45.042250 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084407 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.084482 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187381 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.187487 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290189 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290252 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290294 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.290311 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392803 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392834 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.392846 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495382 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.495510 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597898 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597967 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.597992 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.598012 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700844 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700927 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700949 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.700965 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803336 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803404 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803425 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803454 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.803477 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.905925 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906033 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906080 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:45 crc kubenswrapper[4763]: I0131 14:55:45.906104 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:45Z","lastTransitionTime":"2026-01-31T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008792 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.008922 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.035592 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:59:23.576747983 +0000 UTC Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.111223 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213324 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.213340 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315788 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315796 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315812 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.315821 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419233 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.419937 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523283 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523294 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523309 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.523320 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627894 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627958 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627974 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.627996 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.628013 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730444 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730458 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.730467 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832973 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832990 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.832998 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935173 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935244 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.935286 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:46Z","lastTransitionTime":"2026-01-31T14:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:46 crc kubenswrapper[4763]: I0131 14:55:46.944906 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:46 crc kubenswrapper[4763]: E0131 14:55:46.945105 4763 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:46 crc kubenswrapper[4763]: E0131 14:55:46.945253 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:56:18.945188605 +0000 UTC m=+98.699926938 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.035953 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:40:55.36656374 +0000 UTC Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038165 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038209 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.038227 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041430 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041484 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041515 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.041642 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.041634 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.041848 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.042083 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.042174 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142304 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142359 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.142382 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244491 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244573 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.244610 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347394 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347412 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347439 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.347458 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.449415 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551227 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551296 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551314 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551343 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.551361 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653519 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.653597 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698514 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698550 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698576 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.698584 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.717441 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721858 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721873 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.721883 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.739737 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743367 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743431 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
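Every retry above fails the same way, at the TLS layer, before the status patch ever reaches the webhook: the serving certificate behind node.network-node-identity.openshift.io expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-31. A minimal Go sketch for inspecting that certificate's validity window directly (an illustration, not OpenShift tooling; it assumes it runs on the node and that the webhook still listens on 127.0.0.1:9743 as in the Post URL above):

// Not part of the log: one-off check of the webhook serving certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify only disables chain/validity verification so the
	// handshake completes; we then read the same NotBefore/NotAfter window
	// that produced "certificate has expired or is not yet valid".
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("notBefore=%s notAfter=%s expiredNow=%v\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		time.Now().After(cert.NotAfter))
}

Run against a certificate whose notAfter is 2025-08-24T17:21:41Z, this would report expiredNow=true, matching the kubelet's x509 error verbatim.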
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743477 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.743494 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.763183 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767270 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
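Separately from the webhook failure, the recurring Ready=False condition has a single stated cause: the container runtime reports NetworkPluginNotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration file. The sketch below only approximates that directory check; the real logic lives in CRI-O's ocicni library, and the accepted extensions here are an assumption mirroring common CNI config file names:

// Not part of the log: rough approximation of the "no CNI configuration
// file" readiness condition surfaced in the kubelet messages above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("network not ready: %v\n", err)
		return
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // assumed config extensions
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Printf("network not ready: no CNI configuration file in %s\n", dir)
		return
	}
	fmt.Println("CNI configurations found:", confs)
}

On this node the network provider (OVN-Kubernetes, per the ovnkube-identity references later in the log) would be expected to write that configuration once it starts, which is why the condition can only clear after the network pods come up.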
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767284 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.767312 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.783306 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787215 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787227 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.787252 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.809002 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:47Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:47 crc kubenswrapper[4763]: E0131 14:55:47.809160 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810540 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
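The block above is one complete pass of the kubelet's bounded status-update loop: the five patch attempts visible in this pass all fail against the expired webhook certificate, and the kubelet then gives up at 14:55:47.809160 with "update node status exceeds retry count". A simplified illustration of that control flow, assuming upstream kubelet's retry bound of nodeStatusUpdateRetry = 5 (this is a sketch, not the kubelet's actual code):

// Not part of the log: simplified shape of the kubelet's bounded retry.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet's retry bound

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatch(); err != nil {
			// Corresponds to "Error updating node status, will retry".
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Stand-in for the failing PATCH: every attempt hits the expired
	// webhook certificate, so all five tries fail and the loop gives up.
	err := updateNodeStatus(func() error {
		return errors.New("x509: certificate has expired or is not yet valid")
	})
	fmt.Println(err)
}

The pass is re-entered on the kubelet's next node-status sync tick, which is why the identical payload reappears in the journal moments later.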
event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810571 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810582 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.810612 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913113 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913157 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913199 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:47 crc kubenswrapper[4763]: I0131 14:55:47.913216 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:47Z","lastTransitionTime":"2026-01-31T14:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.015769 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.036529 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:03:00.341248438 +0000 UTC Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.118956 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.119069 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221162 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221224 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221238 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.221268 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323810 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.323853 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426484 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426512 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.426533 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.496741 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/0.log" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.496823 4763 generic.go:334] "Generic (PLEG): container finished" podID="2335d04f-10b2-4cf8-aae6-236650539c74" containerID="e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947" exitCode=1 Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.496857 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerDied","Data":"e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.497257 4763 scope.go:117] "RemoveContainer" containerID="e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.513177 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.525306 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529830 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.529850 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.538238 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.552598 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.572748 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.588979 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.620959 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.632936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633007 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.633081 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.646327 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.667223 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.685163 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.700771 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.716585 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.726417 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735911 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735954 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735968 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735988 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.735999 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.740356 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.755906 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.769826 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.786486 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.797397 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:48Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838556 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838621 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838639 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.838682 4763 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941073 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:48 crc kubenswrapper[4763]: I0131 14:55:48.941082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:48Z","lastTransitionTime":"2026-01-31T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.036799 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:02:47.456049567 +0000 UTC Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.041239 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.041459 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.041893 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.042034 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.042202 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.042200 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.042362 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:49 crc kubenswrapper[4763]: E0131 14:55:49.042485 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043724 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043764 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043780 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.043791 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146401 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146457 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146474 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.146487 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248637 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.248867 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351094 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.351147 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454223 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.454273 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.501985 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/0.log" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.502040 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.516171 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.536660 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557084 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557292 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557319 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.557380 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.574348 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.590102 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.606512 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.621557 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.638126 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.650800 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660075 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660147 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660170 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660196 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.660215 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.665149 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.695214 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.711058 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.743393 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763222 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763332 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763360 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763370 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763384 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.763394 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.781602 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.798968 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.815587 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.836131 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865199 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865237 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.865276 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.967926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968397 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:49 crc kubenswrapper[4763]: I0131 14:55:49.968482 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:49Z","lastTransitionTime":"2026-01-31T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.037816 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:35:10.391208403 +0000 UTC Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.070969 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071043 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.071085 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174143 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174183 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.174226 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276365 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276378 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.276389 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378570 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378616 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378626 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378642 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.378655 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480685 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480738 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480746 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.480769 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583604 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583691 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.583721 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685874 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685899 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.685913 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789118 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.789136 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892065 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892073 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892089 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.892099 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995623 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995638 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995690 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:50 crc kubenswrapper[4763]: I0131 14:55:50.995796 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:50Z","lastTransitionTime":"2026-01-31T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.037952 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:01:00.167466984 +0000 UTC Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.041553 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.041659 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.041861 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.041951 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.042112 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.042186 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.042429 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:51 crc kubenswrapper[4763]: E0131 14:55:51.042508 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.059125 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.075915 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.089840 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098336 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.098411 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.101450 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.117041 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.130207 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness 
Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.142893 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.160998 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.172417 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.191479 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200406 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200443 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc 
kubenswrapper[4763]: I0131 14:55:51.200455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200472 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.200486 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.203067 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:5
5:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.225482 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.244561 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.262030 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.275553 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.289438 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303097 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303159 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303175 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.303186 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.307644 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e32
4ecdf5102fe9f2332c3dfb25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.320811 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405512 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405529 4763 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.405539 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507073 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.507113 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610418 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610455 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610465 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610483 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.610497 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712538 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.712561 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815018 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815047 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.815082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917511 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917584 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917607 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:51 crc kubenswrapper[4763]: I0131 14:55:51.917657 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:51Z","lastTransitionTime":"2026-01-31T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020231 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020297 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.020356 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.038797 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:27:33.13352865 +0000 UTC Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123486 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123512 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.123522 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225897 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225938 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225951 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225970 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.225983 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328502 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328561 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328578 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.328591 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431731 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.431801 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533948 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.533997 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.534009 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636184 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636201 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.636243 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738536 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738552 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738576 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.738595 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.840978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841342 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841356 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.841368 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943770 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943806 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943818 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:52 crc kubenswrapper[4763]: I0131 14:55:52.943848 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:52Z","lastTransitionTime":"2026-01-31T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.039262 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:34:40.541969388 +0000 UTC Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.041739 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.041910 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.042187 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.042294 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.042500 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.042593 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.042969 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:53 crc kubenswrapper[4763]: E0131 14:55:53.043073 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045851 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045884 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045898 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045912 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.045923 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148774 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148848 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148880 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.148892 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251938 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.251983 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354793 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354841 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354871 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.354884 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458123 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458136 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458155 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.458166 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560560 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560579 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.560620 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663322 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.663452 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765958 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.765989 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869021 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869092 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869110 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.869155 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971547 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971618 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:53 crc kubenswrapper[4763]: I0131 14:55:53.971729 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:53Z","lastTransitionTime":"2026-01-31T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.040030 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:39:33.917368299 +0000 UTC Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074160 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.074197 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.177981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178027 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178036 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.178060 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281793 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281860 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281879 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.281923 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385368 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385388 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.385443 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488438 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488471 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.488483 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.591880 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.591953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.591966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.592003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.592018 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.694349 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797355 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797368 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.797407 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901101 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:54 crc kubenswrapper[4763]: I0131 14:55:54.901136 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:54Z","lastTransitionTime":"2026-01-31T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003944 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003966 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.003981 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.040766 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:18:43.059042521 +0000 UTC Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.040943 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.041061 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041204 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.041238 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.041286 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041412 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041440 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:55 crc kubenswrapper[4763]: E0131 14:55:55.041545 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107207 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107239 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.107276 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.210936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211036 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.211093 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314526 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314558 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.314579 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.417943 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418036 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418063 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.418079 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521077 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521179 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.521233 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624257 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624268 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624302 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.624312 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726364 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726464 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.726479 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828733 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.828765 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931356 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931386 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:55 crc kubenswrapper[4763]: I0131 14:55:55.931407 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:55Z","lastTransitionTime":"2026-01-31T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035394 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035435 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.035451 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.041967 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:00:57.817626118 +0000 UTC Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139807 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139896 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139964 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.139987 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244234 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244291 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244304 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244330 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.244345 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347394 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347442 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.347460 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450319 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450460 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.450487 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.553964 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554076 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554107 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.554132 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656818 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.656920 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761655 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761675 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.761757 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865684 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.865982 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969513 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969593 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:56 crc kubenswrapper[4763]: I0131 14:55:56.969654 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:56Z","lastTransitionTime":"2026-01-31T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.041755 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.041805 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.041911 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.041938 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.042027 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.043004 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.042756 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.042195 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:47:41.268492047 +0000 UTC Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.043184 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.043257 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096385 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096410 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.096430 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203855 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203931 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203955 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.203973 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306567 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306603 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306614 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.306642 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409548 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.409602 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512274 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512305 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512314 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.512337 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.533083 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.536442 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.537057 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.563158 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd
838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.590574 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.610136 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614241 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614256 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.614268 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.625125 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.636587 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.647780 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.657853 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.665740 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.673614 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.686135 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.701674 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 
14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.712467 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715610 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715658 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.715670 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.731532 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.744778 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.756573 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.765714 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.774420 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.792115 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817521 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817551 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817562 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.817588 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835783 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.835807 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.847390 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850337 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850369 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850380 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.850406 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.860974 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863196 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863205 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.863225 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.873075 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876023 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876077 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.876086 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.885259 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888127 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
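The patch above is rejected not by the kubelet but by admission: the API server cannot complete a TLS handshake with the network-node-identity webhook on 127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-31. A minimal Go sketch of the same validity-window check the TLS stack performs, assuming the webhook serving certificate has been exported to a local PEM file (the file path here is hypothetical):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; point it at wherever the webhook serving cert was exported.
        pemBytes, err := os.ReadFile("webhook-serving.pem")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
        // The same condition the handshake reports as
        // "certificate has expired or is not yet valid".
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Println("certificate is outside its validity window")
        }
    }

The log continues with the node condition events that accompany each failed attempt: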
event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888153 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888207 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.888231 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.908842 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:57 crc kubenswrapper[4763]: E0131 14:55:57.909089 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920650 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
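"Error updating node status, will retry" followed by "update node status exceeds retry count" is the kubelet's bounded retry running out: the status patch is attempted a small fixed number of times per sync (upstream the constant is nodeStatusUpdateRetry = 5) and then abandoned until the next sync interval. A sketch of that retry shape with the patch call stubbed to fail the way this log shows; the function names are illustrative, not kubelet source:

    package main

    import (
        "errors"
        "fmt"
    )

    const nodeStatusUpdateRetry = 5 // upstream kubelet retry budget per sync

    // patchNodeStatus stands in for the real PATCH against the API server;
    // here it always fails the way the log shows (webhook cert expired).
    func patchNodeStatus() error {
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := patchNodeStatus(); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println(err)
        }
    }

The event recording that follows each abandoned attempt resumes below: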
event="NodeHasSufficientMemory" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920781 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:57 crc kubenswrapper[4763]: I0131 14:55:57.920801 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:57Z","lastTransitionTime":"2026-01-31T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.022904 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.022959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.022976 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.023001 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.023017 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.043474 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:35:03.728064693 +0000 UTC Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125520 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125542 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125574 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.125594 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227932 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.227988 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.228009 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
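One line worth noticing amid the repetition: the kubelet's own serving certificate (kubernetes.io/kubelet-serving) expires 2026-02-24, but its rotation deadline of 2025-12-08 is already in the past, so rotation is due immediately. If upstream certificate-manager behavior applies here, that deadline is picked by jittering within the tail of the validity window (roughly the 70-90% band); a sketch of the idea, where the issue time is an assumption (one-year validity inferred from the expiry in the log):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a point 70-90% of the way through the
    // certificate's validity, mirroring the kubelet certificate manager's jitter.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jitter := 0.7 + 0.2*rand.Float64()
        return notBefore.Add(time.Duration(jitter * float64(total)))
    }

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
        fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
    }

The node condition events continue: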
Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330790 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330845 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330863 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330886 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.330902 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433192 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433255 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433279 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433300 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.433320 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.535918 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
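All of this "Node became not ready" repetition traces back to a single condition: the container runtime reports NetworkReady=false because no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/, the file that ovnkube-controller is supposed to write once it runs (and it is crash-looping, as the next entries show). The runtime-side check is essentially "does the conf directory contain a usable config file"; a stdlib Go approximation of that check, with the directory taken from the log message:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // networkReady mimics the runtime-side readiness check: NetworkReady stays
    // false until at least one CNI config file appears in the conf directory.
    func networkReady(confDir string) bool {
        entries, err := os.ReadDir(confDir)
        if err != nil {
            return false
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(networkReady("/etc/kubernetes/cni/net.d/"))
    }

The log picks up with the crash-looping OVN controller: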
Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.541582 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.542163 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/2.log" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.545746 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" exitCode=1 Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.545810 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.545874 4763 scope.go:117] "RemoveContainer" containerID="e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.546930 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:55:58 crc kubenswrapper[4763]: E0131 14:55:58.547251 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.570843 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
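ovnkube-controller exits with code 1 and the kubelet refuses an immediate restart: "back-off 40s restarting failed container" is CrashLoopBackOff, where the restart delay doubles on each consecutive crash from an initial 10s up to a 5-minute cap (so 40s suggests roughly the third crash in a row). A sketch of that doubling using those long-standing upstream defaults:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay returns the kubelet-style restart delay after n
    // consecutive crashes: 10s, 20s, 40s, ... capped at 5 minutes.
    func crashLoopDelay(n int) time.Duration {
        d := 10 * time.Second
        for i := 1; i < n; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("crash %d -> back-off %s\n", n, crashLoopDelay(n))
        }
    }

The failed status patch for kube-apiserver-crc continues: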
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.587340 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.601411 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
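The multus-qzkhg lastState embedded above explains its earlier restart: kube-multus polls for a "readiness indicator file", the default network's config at /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, and exits when the poll times out (roughly 45s by the timestamps, 14:55:03 to 14:55:48). The "pollimmediate error: timed out waiting for the condition" string comes from the Kubernetes wait helper; a self-contained stdlib sketch of the same poll-until-file-exists-or-timeout pattern, with interval and timeout values chosen for illustration:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls for path until it exists or the timeout elapses,
    // the shape of multus's readiness-indicator check.
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        fmt.Println(err)
    }

The remaining per-pod status failures follow the same pattern: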
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.616580 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.633809 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638621 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638682 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc 
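Every one of these per-pod status patches fails identically because the API server must consult the pod.network-node-identity.openshift.io validating webhook before admitting any pod status change, so a single expired serving certificate on 127.0.0.1:9743 blocks status updates for every pod on the node. A quick Go probe that dials the endpoint from the log and prints the presented certificate's validity window; verification is skipped deliberately so the expired certificate can still be inspected:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // Endpoint taken from the log; InsecureSkipVerify lets us read the
        // expired certificate instead of failing the handshake.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }

The log continues with more node events and the next failed pod status update: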
kubenswrapper[4763]: I0131 14:55:58.638736 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.638786 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.647662 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:5
5:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.659262 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.683408 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e
04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.698659 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.712877 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.724977 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.739238 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741350 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741398 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741415 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.741427 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.762122 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a7
12155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e41ef8b743225b901fc59f6154c26f029edb3e324ecdf5102fe9f2332c3dfb25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:27Z\\\",\\\"message\\\":\\\":190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:55:27.057627 6365 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:55:27.057670 6365 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:55:27.057650 6365 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:55:27.057676 6365 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0131 14:55:27.057685 6365 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:55:27.057715 6365 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0131 14:55:27.057680 6365 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:55:27.057645 6365 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:55:27.057741 6365 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:55:27.057774 6365 factory.go:656] Stopping watch factory\\\\nI0131 14:55:27.057790 6365 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:55:27.058116 6365 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0131 14:55:27.058250 6365 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0131 14:55:27.058320 6365 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:55:27.058365 6365 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0131 14:55:27.058474 6365 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.777036 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.796079 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.811095 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.826414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.843466 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844609 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844730 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844764 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.844790 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.947965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948010 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:58 crc kubenswrapper[4763]: I0131 14:55:58.948068 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:58Z","lastTransitionTime":"2026-01-31T14:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.040996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.041032 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.041198 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041196 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041327 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041429 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.041516 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.041605 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.043857 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:03:57.115355959 +0000 UTC Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050572 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.050685 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153393 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153445 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153462 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.153503 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257665 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257809 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.257825 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.360907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.360978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.360995 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.361019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.361037 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465758 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465832 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465850 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465875 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.465892 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.553893 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.561314 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:55:59 crc kubenswrapper[4763]: E0131 14:55:59.561785 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568646 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.568897 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.582430 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.597623 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.645984 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671782 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671826 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc 
kubenswrapper[4763]: I0131 14:55:59.671870 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671892 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.671908 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.672521 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.686454 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.700416 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.720475 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.737091 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.772932 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.774876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.774963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.774983 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.775007 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.775024 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.797451 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.818190 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.835296 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.855198 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878157 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878226 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878248 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878280 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878305 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.878423 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.900354 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.924021 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.950209 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.967237 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982112 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982194 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982249 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:55:59 crc kubenswrapper[4763]: I0131 14:55:59.982272 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:55:59Z","lastTransitionTime":"2026-01-31T14:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.044923 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:55:14.253359862 +0000 UTC Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085351 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.085369 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188204 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188230 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188258 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.188279 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290827 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290848 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290874 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.290892 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393393 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.393444 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.496924 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.496986 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.497007 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.497062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.497081 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600014 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600096 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600119 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.600137 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.703924 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704093 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704121 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.704139 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807359 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807448 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.807490 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910101 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910193 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:00 crc kubenswrapper[4763]: I0131 14:56:00.910204 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:00Z","lastTransitionTime":"2026-01-31T14:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012663 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.012781 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.041307 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.041442 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.041510 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.041563 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.041967 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.042169 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.042551 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:01 crc kubenswrapper[4763]: E0131 14:56:01.042741 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.045299 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:12:39.099018448 +0000 UTC Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.060681 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.081206 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.099357 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115812 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115937 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.115952 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.119146 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.140907 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.158762 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.180289 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.202069 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.218915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.218982 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.218999 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.219028 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.219045 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.225414 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.243332 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.259531 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.279547 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.298902 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.315015 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322311 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322374 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.322415 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.332915 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.366136 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"
containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.385957 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.419471 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425607 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425630 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425668 4763 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.425756 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.527934 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528265 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528278 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528295 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.528307 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630403 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630469 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.630480 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733181 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733504 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.733669 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836679 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836784 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.836802 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.940984 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.941414 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.941565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.941854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:01 crc kubenswrapper[4763]: I0131 14:56:01.942047 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:01Z","lastTransitionTime":"2026-01-31T14:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045635 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045722 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045508 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:59:39.5736501 +0000 UTC
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.045884 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.069261 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.149195 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.149617 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.149854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.150019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.150159 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253059 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253348 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253432 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.253599 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.356745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.356806 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.356823 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.356847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.356862 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.460546 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.461346 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.461520 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.461660 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.461827 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.564389 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.564459 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.564481 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.564510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.564533 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.667005 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.667046 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.667066 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.667094 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.667109 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.769404 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.769441 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.769452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.769468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.769478 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.871974 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.872033 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.872051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.872078 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.872095 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.974907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.974965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.974981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.975008 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:02 crc kubenswrapper[4763]: I0131 14:56:02.975025 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:02Z","lastTransitionTime":"2026-01-31T14:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.040996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.041110 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041166 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041262 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.041287 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.041327 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041470 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.041525 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.046000 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:47:34.476241421 +0000 UTC
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077678 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077785 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077808 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077836 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.077858 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:03Z","lastTransitionTime":"2026-01-31T14:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.800373 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.800423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.800440 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.800466 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.800483 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:03Z","lastTransitionTime":"2026-01-31T14:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.903621 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.903666 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.903678 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.903716 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.903728 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:03Z","lastTransitionTime":"2026-01-31T14:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.970116 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.970317 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.970287971 +0000 UTC m=+147.725026274 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.970783 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.971004 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:03 crc kubenswrapper[4763]: I0131 14:56:03.971182 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971024 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971517 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971663 4763 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971898 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.971873545 +0000 UTC m=+147.726611868 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971081 4763 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.972212 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.972195365 +0000 UTC m=+147.726933688 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.971333 4763 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:56:03 crc kubenswrapper[4763]: E0131 14:56:03.972506 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.972491403 +0000 UTC m=+147.727229726 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006333 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006391 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006407 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.006443 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:04Z","lastTransitionTime":"2026-01-31T14:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.046861 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:17:24.409298449 +0000 UTC Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.072764 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.072979 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.073023 4763 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.073043 4763 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:56:04 crc kubenswrapper[4763]: E0131 14:56:04.073124 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.073100802 +0000 UTC m=+147.827839135 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.109560 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.109596 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.109606 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.109621 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.109631 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:04Z","lastTransitionTime":"2026-01-31T14:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.836453 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.836655 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.836754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.836866 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.836957 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:04Z","lastTransitionTime":"2026-01-31T14:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.940044 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.940135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.940158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.940182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:04 crc kubenswrapper[4763]: I0131 14:56:04.940200 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:04Z","lastTransitionTime":"2026-01-31T14:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.041689 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.041930 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.041952 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.041753 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.042374 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.042460 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.042532 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:05 crc kubenswrapper[4763]: E0131 14:56:05.042812 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044026 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044058 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044071 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.044082 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:05Z","lastTransitionTime":"2026-01-31T14:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.047393 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 07:51:12.704965482 +0000 UTC Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.147572 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.147639 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.147657 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.147683 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.147744 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:05Z","lastTransitionTime":"2026-01-31T14:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.250408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.250477 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.250495 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.250521 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.250540 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:05Z","lastTransitionTime":"2026-01-31T14:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.973456 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.973521 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.973539 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.973565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:05 crc kubenswrapper[4763]: I0131 14:56:05.973582 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:05Z","lastTransitionTime":"2026-01-31T14:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.048207 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:37:33.283191735 +0000 UTC Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.076909 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.076980 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.076998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.077024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.077042 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:06Z","lastTransitionTime":"2026-01-31T14:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.797684 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.797766 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.797789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.797816 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.797835 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:06Z","lastTransitionTime":"2026-01-31T14:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.901214 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.901286 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.901301 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.901325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:06 crc kubenswrapper[4763]: I0131 14:56:06.901342 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:06Z","lastTransitionTime":"2026-01-31T14:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.004015 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.004081 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.004102 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.004127 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.004145 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041782 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041882 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041880 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.041954 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.041996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.042238 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.042335 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.042458 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.048767 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:46:52.069127847 +0000 UTC Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107169 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107223 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107240 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107266 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.107285 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210377 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.210492 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313894 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.313930 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417062 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417128 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417180 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.417202 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520112 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520144 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.520166 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622725 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622776 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622798 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622828 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.622850 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726506 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726568 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726587 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726611 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.726629 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829072 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829123 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829140 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829161 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.829178 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918174 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918190 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918212 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.918228 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:07 crc kubenswrapper[4763]: E0131 14:56:07.936684 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:07Z is after 
2025-08-24T17:21:41Z"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994459 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994565 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994649 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:07 crc kubenswrapper[4763]: I0131 14:56:07.994668 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:07Z","lastTransitionTime":"2026-01-31T14:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:08 crc kubenswrapper[4763]: E0131 14:56:08.016538 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:08Z is after 
2025-08-24T17:21:41Z" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022545 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022643 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.022729 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: E0131 14:56:08.044647 4763 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:56:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b7852931-3d3a-417c-b1dc-4eae70947913\\\",\\\"systemUUID\\\":\\\"dae69c69-4f41-4a04-af59-12d21fa5088f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:08Z is after 
2025-08-24T17:21:41Z" Jan 31 14:56:08 crc kubenswrapper[4763]: E0131 14:56:08.044917 4763 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047636 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047744 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047762 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047789 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.047837 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.049794 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:13:43.061630106 +0000 UTC Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150785 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150935 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.150962 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.151012 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254559 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254624 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254647 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254676 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.254732 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358114 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358510 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.358551 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462357 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462428 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462564 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.462588 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565674 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565741 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565769 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.565788 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668881 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668943 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668957 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668981 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.668996 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772254 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772299 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.772326 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.875634 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.875747 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.875771 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.875797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.875816 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.978219 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.978261 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.978272 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.978289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:08 crc kubenswrapper[4763]: I0131 14:56:08.978300 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:08Z","lastTransitionTime":"2026-01-31T14:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.041356 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.041553 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.041563 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.042257 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.042351 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.042392 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.042805 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:09 crc kubenswrapper[4763]: E0131 14:56:09.042926 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.050165 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:26:38.312271457 +0000 UTC Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.081430 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.081489 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.081505 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.081529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.081546 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.186269 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.186417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.186433 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.186452 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.186469 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.289728 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.289753 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.289760 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.289772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.289782 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.392739 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.393033 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.393125 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.393225 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.393407 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.496387 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.496427 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.496436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.496450 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.496460 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.599030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.599084 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.599097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.599116 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.599128 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.702514 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.702577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.702599 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.702632 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.702653 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.805419 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.805484 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.805501 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.805525 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.805542 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.907926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.907991 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.908030 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.908067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:09 crc kubenswrapper[4763]: I0131 14:56:09.908091 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:09Z","lastTransitionTime":"2026-01-31T14:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.009986 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.010032 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.010040 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.010054 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.010062 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.051056 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:40:32.428995269 +0000 UTC Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.113140 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.113197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.113213 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.113237 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.113255 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.216368 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.216424 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.216434 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.216449 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.216460 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.319417 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.319482 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.319499 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.319523 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.319540 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.421826 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.421875 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.421890 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.421936 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.421991 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.525199 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.525261 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.525282 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.525307 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.525323 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.629138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.629198 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.629217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.629242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.629258 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.733043 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.733117 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.733135 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.733168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.733190 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836250 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836332 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836350 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836383 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.836401 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939010 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939076 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939127 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:10 crc kubenswrapper[4763]: I0131 14:56:10.939150 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:10Z","lastTransitionTime":"2026-01-31T14:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.040922 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.040987 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.040913 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041071 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.041106 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041255 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041338 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:11 crc kubenswrapper[4763]: E0131 14:56:11.041433 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.042887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.042953 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.042979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.043009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.043034 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.051188 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:07:45.575408207 +0000 UTC Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.065569 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"72708d98-90a0-4456-bdbb-6ccdf80bd45f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.085498 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qzkhg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2335d04f-10b2-4cf8-aae6-236650539c74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:48Z\\\",\\\"message\\\":\\\"2026-01-31T14:55:02+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6\\\\n2026-01-31T14:55:02+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_313e889b-454e-47d4-acdf-ae13f21667c6 to /host/opt/cni/bin/\\\\n2026-01-31T14:55:03Z [verbose] multus-daemon started\\\\n2026-01-31T14:55:03Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:55:48Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zh4pm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qzkhg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.104904 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-26pm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84302428-88e1-47ba-84cc-7d12472f9aa2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mlvp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-26pm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.118775 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a87dec4-20df-4b46-878a-2fd4e60feedd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd4c38511fece2af6df3bb93ecff7c793bbf4320c7b78e9996fa88a8775d2752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b969d8a316d58e6e57d70e05ba1213b54e8ce8ddb87cbdc9f387758d2d63ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.135985 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-qcb97" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69a7de9b-f4a3-408b-8b12-570db6fcd84f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://874d156f8239dcd4d48af107645b299717607aebbe983fb2b6031f4e5e576655\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gzpgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qcb97\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146742 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146839 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146867 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.146890 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.164100 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npvkf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"081252dc-3eaa-4608-8b06-16c377dff2e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c22c1ac296ab4048b03d521675c6222a66d1007f4dde26d2d5a7ea4a108fbe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1cd86bdf0810bb714cf1b5dc9406913240e2c2cb075850401845d6bd0a9e021\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db4f9999
1cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db4f99991cf6f81209281685dd933512f9df5e5e1ec1da7aed256256aa83e138\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99ef33b958ed242f6d015c312d654261f9cf3da45e4a0b5f500e2ffcb2339226\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c695a263a470eebdb567b620df349bcb94adbad81bfed22cd9c4c1a65844d830\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd88956c55324e65570fab5cb77ff6913f32fd3064f9d77bef1db564752a6073\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95f495d13b8feb2670e2e6caf565b6bfebf557c825fc392bda5ec7f0f7cad4a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4tglt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npvkf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.185084 4763 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07280471-d907-4c1f-a38f-9337ecb04b43\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0a68a90c84c384016b30e31a0dae292a36a85338731f096181443deec0a036e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a1aa5110b30c48bccc0d22e7e40da48a9f0709ac2a79ffc9afce33e6113b069\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzsm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8lmbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.209792 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"047ce610-09fa-482b-8d29-45ad376d12b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:55:57Z\\\",\\\"message\\\":\\\"ps://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:55:57Z is after 2025-08-24T17:21:41Z]\\\\nI0131 14:55:57.965475 6780 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0131 14:55:57.965601 6780 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{c02bd945-d57b-49ff-9cd3-202ed3574\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxcsm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtknf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.231570 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dac69ab-f0fe-47f2-b03a-a78a0ede4fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a88a2346ee1adf65c2806d13514d8c012ea043785bf808123e8639f67f956f97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98b72a7d666db4ba071d18272e07a6c063ed4a68c874d76621958647a9ac43fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://841434ac2aa4cdb9dc1a36bb53d7b10fa7b1b70602a81d0a2aef23bd1ededc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f41b52538916f689a998d86c82fcb27a899fec25b5ed9e7760213744f31cd6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.249913 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.249998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.250019 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.250045 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.250063 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.258178 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91f31b51-f3f1-446b-ad57-42dc89fb54e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dad32475c1e4bd832bb6b9186238500f6ecfa7b7f724763e395b9ed7f610977e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef505f022f5a778a7530af7a06747dfbc1b2a962c9c685cb061422cbceba931\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a1cbaf13955986cf1c097898a17e4102c9d5e2375034ea17e14e030a32040d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://337c30a20da9dba9b83ffbda214d4cae3925a6e04e44eacfaad232073a1c0142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4914679d7eea3f72292709616cdbef4f0a70632b9e4bf250a91fb3409e8b1b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c0283e5088ff53cf4969895ce8369b6f5b2f35388c257eb4138bbbe4844a8dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb217ae22fae874f1e3472f8e28e89ad7b9b69e1ecb30e36ce416fa5a7dd189f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4a255b921636b8d8f360278cabbc9586e7cbfc491b35bce9639c751396703cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:54:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.276082 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://484a1da4637145d0e21fab45756a1ef10f94d8c12d4b14be6ceb770087f03355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e938f501982a9f0dea77dc6903b51be197ad9ccca86fe91584658fb7e595e0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.293769 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.310853 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ghn8r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82ab6d11-5754-4903-ac36-bb0279dfa1fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f17cb493256614df175b83a4bc271a70b1e7ef9501db2f3c8703029f359fd07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vlrxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ghn8r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.326551 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d1f3628-a7fe-4094-a313-96c0469fcf78\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39716df3b4bec0566e17c818dc29a3dbb401fad138d924858268c7411ed627cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pkx2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:55:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9wp2x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.344952 4763 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81bf01b9-0e9a-4577-afa6-49406bae97f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59e863060ca603e7aa33de1c446a22c1d3d6e135d0d6a278580940d3de661579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d8627c371e54fe5b5dd838ef51fbce82ad336368353226b8cd7b56adc51d063\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a7c3afa1359bc2630959a30c34
557fd8c8a52f0065af236f8d086a2f1e56098\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:54:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353055 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353131 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353150 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.353194 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.364224 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82ca0064cfce0714a2ac5b0b3f7671a7d1076ca17c1520699f24d05cfb40018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.383521 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.403092 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.418686 4763 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:55:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://443733216cdaa990ae82465ed862f070d1476580a1fc95f7ee2e6bd4c879b75d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:55:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:56:11Z is after 2025-08-24T17:21:41Z" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.455899 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456176 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456324 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456468 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.456601 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559687 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559803 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559824 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559851 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.559871 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662087 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662130 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662141 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662158 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.662170 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764673 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764745 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764758 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764775 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.764788 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867141 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867467 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867586 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867684 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.867800 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.970928 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.970996 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.971013 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.971037 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:11 crc kubenswrapper[4763]: I0131 14:56:11.971055 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:11Z","lastTransitionTime":"2026-01-31T14:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.051481 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:13:55.881637622 +0000 UTC Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.073751 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074503 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.074869 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178099 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178148 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178166 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178188 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.178203 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281763 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281872 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281895 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281919 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.281937 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.384680 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.384970 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.385085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.385168 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.385254 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488315 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488659 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488772 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488887 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.488972 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591602 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.591976 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.592001 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.694993 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695086 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695103 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695126 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.695144 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798116 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798124 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798137 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.798145 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900680 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900767 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900783 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:12 crc kubenswrapper[4763]: I0131 14:56:12.900818 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:12Z","lastTransitionTime":"2026-01-31T14:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003533 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003577 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003591 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003608 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.003621 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041006 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041095 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041143 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041102 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.041285 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5"
Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041348 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041528 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.041747 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.043399 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862"
Jan 31 14:56:13 crc kubenswrapper[4763]: E0131 14:56:13.043783 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.052202 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:12:51.396572971 +0000 UTC
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105854 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105910 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105928 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105950 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.105975 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
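By this point the excerpt has settled into a single failure loop: the kubelet marks node "crc" NotReady roughly every 100 ms because no CNI configuration exists in /etc/kubernetes/cni/net.d/, four pods (networking-console-plugin, network-check-source, network-check-target, network-metrics-daemon) cannot get sandboxes for the same reason, and ovnkube-controller, the component that would normally write that CNI config on an OVN-Kubernetes node, is itself in CrashLoopBackOff. A minimal sketch for summarizing the loop from a saved copy of this journal; the file name kubelet.log and the substring heuristics are assumptions for illustration, not part of the log:

```python
#!/usr/bin/env python3
"""Summarize the kubelet NotReady loop in a saved journal excerpt.

Minimal sketch: it matches only the exact message shapes visible in
this log; "kubelet.log" is an assumed file name.
"""
import re
from collections import Counter

not_ready = 0        # count of "Node became not ready" heartbeats
stuck = Counter()    # pod -> count of "Error syncing pod" lines

with open("kubelet.log", encoding="utf-8") as fh:
    for line in fh:
        if '"Node became not ready"' in line:
            not_ready += 1
        elif '"Error syncing pod, skipping"' in line:
            m = re.search(r'pod="([^"]+)"', line)
            if m:
                stuck[m.group(1)] += 1

print(f"NotReady heartbeats: {not_ready}")
for pod, n in stuck.most_common():
    print(f"{n:4d}  {pod}")
```

Run against this excerpt, it should report the four sandbox-less pods above plus openshift-ovn-kubernetes/ovnkube-node-dtknf.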
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.208963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209016 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.209061 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311492 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311585 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311612 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.311628 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.414998 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415049 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415063 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.415303 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518686 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518800 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518817 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518843 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.518860 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623051 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623121 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623139 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623163 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.623181 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726554 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726631 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726651 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726677 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.726725 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829470 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829529 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829542 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.829577 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932749 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932788 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932815 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:13 crc kubenswrapper[4763]: I0131 14:56:13.932829 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:13Z","lastTransitionTime":"2026-01-31T14:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036804 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036822 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.036939 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.052521 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:09:24.203591128 +0000 UTC
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141271 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141353 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141376 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141408 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.141514 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
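The kubelet-serving certificate line recurs about once per second with the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass (2025-12-07, 2026-01-15, 2025-12-12, ... in this excerpt), every one of them already in the past at 2026-01-31. That pattern is consistent with client-go's certificate manager re-deriving a jittered deadline on each sync, commonly described as landing 70-90% of the way through the certificate's validity window, and retrying immediately because the deadline has passed. A sketch of that computation; the jitter factor and the back-solved validity period are assumptions for illustration, not confirmed by this log:

```python
#!/usr/bin/env python3
"""Jittered certificate-rotation deadline, in the style attributed to
k8s.io/client-go's certificate manager. Only the expiration timestamp
comes from the log; everything else is an assumption."""
import random
from datetime import datetime, timedelta, timezone

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # assumed jitter: rotate somewhere 70%-90% into the validity window
    return not_before + (not_after - not_before) * (0.7 + 0.2 * random.random())

not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
# validity of ~340 days back-solved from the spread of deadlines in this excerpt
not_before = not_after - timedelta(days=340)

deadline = rotation_deadline(not_before, not_after)
now = datetime(2026, 1, 31, 14, 56, 14, tzinfo=timezone.utc)      # log time
print("rotation deadline:", deadline, "| already due:", deadline < now)
```

Under those assumptions the sampled deadlines fall roughly between mid-November 2025 and late January 2026, spanning the values recorded in the log, which would explain why every recomputation still comes out in the past and rotation stays permanently due.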
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244363 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244405 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244418 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244436 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.244449 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346595 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346645 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346661 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346681 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.346715 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449802 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449884 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449908 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.449928 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552242 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552317 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552329 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552353 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.552369 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654862 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654959 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.654974 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757849 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757882 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757892 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757907 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.757918 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859671 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859716 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859724 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859737 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.859745 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962864 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962926 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962947 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.962978 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:14 crc kubenswrapper[4763]: I0131 14:56:14.963000 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:14Z","lastTransitionTime":"2026-01-31T14:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.041499 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.041629 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.041815 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.041868 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.042021 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.042082 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.042538 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:15 crc kubenswrapper[4763]: E0131 14:56:15.042654 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.054736 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:40:36.988407749 +0000 UTC Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065423 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065520 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065549 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.065571 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169253 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169319 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169352 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.169374 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272182 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272289 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.272311 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374688 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374759 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374776 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374797 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.374813 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477331 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477395 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477412 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477437 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.477455 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580652 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580735 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580778 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.580795 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683245 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683496 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683596 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683670 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.683783 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785752 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785805 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785823 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785846 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.785865 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888415 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888473 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888494 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888518 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.888536 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991047 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991104 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991121 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991145 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:15 crc kubenswrapper[4763]: I0131 14:56:15.991165 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:15Z","lastTransitionTime":"2026-01-31T14:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.055559 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:35:00.188358831 +0000 UTC Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.094438 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.094818 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.094971 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.095151 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.095286 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198662 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198878 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198915 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.198945 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.199057 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301507 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301563 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301591 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301620 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.301640 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404824 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404847 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404876 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.404899 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.508965 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509024 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509038 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509061 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.509077 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611589 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611668 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611689 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.611781 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714264 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714339 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714360 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714392 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.714475 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816756 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816801 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816812 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816829 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.816842 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919210 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919227 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919251 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:16 crc kubenswrapper[4763]: I0131 14:56:16.919268 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:16Z","lastTransitionTime":"2026-01-31T14:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022463 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022534 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022549 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022571 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.022587 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.040912 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.041144 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.041542 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.041685 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.042011 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.042151 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.042509 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:17 crc kubenswrapper[4763]: E0131 14:56:17.042631 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.056279 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:45:39.154386198 +0000 UTC Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125429 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125476 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125493 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125516 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.125533 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228012 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228068 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228087 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228109 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.228124 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.330754 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.330811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.330833 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.331247 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.331304 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.434963 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435031 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435048 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435074 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.435091 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538485 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538580 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538597 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538654 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.538679 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.640979 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641046 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641069 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641097 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.641118 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743217 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743287 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743327 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743359 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.743382 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846811 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846865 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846879 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846902 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.846916 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950246 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950325 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950351 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950382 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:17 crc kubenswrapper[4763]: I0131 14:56:17.950404 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:17Z","lastTransitionTime":"2026-01-31T14:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054003 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054057 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054067 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054085 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.054097 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:18Z","lastTransitionTime":"2026-01-31T14:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.058186 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:15:10.447121742 +0000 UTC Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158009 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158138 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158166 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158197 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.158222 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:18Z","lastTransitionTime":"2026-01-31T14:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168877 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168922 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168941 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168960 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.168976 4763 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:56:18Z","lastTransitionTime":"2026-01-31T14:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.241183 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69"] Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.242083 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.243828 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.244397 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.244869 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.246472 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.294030 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=77.293997689 podStartE2EDuration="1m17.293997689s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.276921849 +0000 UTC m=+98.031660182" watchObservedRunningTime="2026-01-31 14:56:18.293997689 +0000 UTC m=+98.048736022" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.326933 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.326992 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04ba9988-7dc2-41c2-bebf-9f6308ecd013-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.327065 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ba9988-7dc2-41c2-bebf-9f6308ecd013-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.327118 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04ba9988-7dc2-41c2-bebf-9f6308ecd013-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.327160 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: 
\"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.351953 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podStartSLOduration=78.351925138 podStartE2EDuration="1m18.351925138s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.35022558 +0000 UTC m=+98.104963913" watchObservedRunningTime="2026-01-31 14:56:18.351925138 +0000 UTC m=+98.106663471" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.352745 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ghn8r" podStartSLOduration=78.35273186 podStartE2EDuration="1m18.35273186s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.331438192 +0000 UTC m=+98.086176515" watchObservedRunningTime="2026-01-31 14:56:18.35273186 +0000 UTC m=+98.107470193" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.422004 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.421985287 podStartE2EDuration="49.421985287s" podCreationTimestamp="2026-01-31 14:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.421973077 +0000 UTC m=+98.176711420" watchObservedRunningTime="2026-01-31 14:56:18.421985287 +0000 UTC m=+98.176723590" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428511 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428552 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04ba9988-7dc2-41c2-bebf-9f6308ecd013-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428629 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ba9988-7dc2-41c2-bebf-9f6308ecd013-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428628 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428664 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04ba9988-7dc2-41c2-bebf-9f6308ecd013-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428714 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.428827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/04ba9988-7dc2-41c2-bebf-9f6308ecd013-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.430487 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04ba9988-7dc2-41c2-bebf-9f6308ecd013-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.441045 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ba9988-7dc2-41c2-bebf-9f6308ecd013-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.457634 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04ba9988-7dc2-41c2-bebf-9f6308ecd013-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wfx69\" (UID: \"04ba9988-7dc2-41c2-bebf-9f6308ecd013\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.521231 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.521213037 podStartE2EDuration="1m15.521213037s" podCreationTimestamp="2026-01-31 14:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.520922998 +0000 UTC m=+98.275661331" watchObservedRunningTime="2026-01-31 14:56:18.521213037 +0000 UTC m=+98.275951340" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.539741 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qzkhg" podStartSLOduration=78.539719027 podStartE2EDuration="1m18.539719027s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.538882913 +0000 UTC m=+98.293621236" watchObservedRunningTime="2026-01-31 14:56:18.539719027 +0000 UTC m=+98.294457330" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.565276 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.578622 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.578605211 podStartE2EDuration="1m20.578605211s" podCreationTimestamp="2026-01-31 14:54:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.569236827 +0000 UTC m=+98.323975120" watchObservedRunningTime="2026-01-31 14:56:18.578605211 +0000 UTC m=+98.333343514" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.597224 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qcb97" podStartSLOduration=79.597203413 podStartE2EDuration="1m19.597203413s" podCreationTimestamp="2026-01-31 14:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.579387182 +0000 UTC m=+98.334125485" watchObservedRunningTime="2026-01-31 14:56:18.597203413 +0000 UTC m=+98.351941716" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.614022 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-npvkf" podStartSLOduration=78.614003606 podStartE2EDuration="1m18.614003606s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.598812248 +0000 UTC m=+98.353550551" watchObservedRunningTime="2026-01-31 14:56:18.614003606 +0000 UTC m=+98.368741899" Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.629950 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" event={"ID":"04ba9988-7dc2-41c2-bebf-9f6308ecd013","Type":"ContainerStarted","Data":"e3defdf134a02e5ab7a7ac2cac05ae140aea438d3445df0ebf977475e0c63d7a"} Jan 31 14:56:18 crc kubenswrapper[4763]: I0131 14:56:18.633320 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8lmbv" podStartSLOduration=77.633307039 podStartE2EDuration="1m17.633307039s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.614479409 +0000 UTC m=+98.369217702" watchObservedRunningTime="2026-01-31 14:56:18.633307039 +0000 UTC m=+98.388045332" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.035063 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.035292 4763 
secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.035392 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs podName:84302428-88e1-47ba-84cc-7d12472f9aa2 nodeName:}" failed. No retries permitted until 2026-01-31 14:57:23.035367922 +0000 UTC m=+162.790106325 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs") pod "network-metrics-daemon-26pm5" (UID: "84302428-88e1-47ba-84cc-7d12472f9aa2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.040976 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.041063 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.041079 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.041271 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041266 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041447 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041578 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:19 crc kubenswrapper[4763]: E0131 14:56:19.041765 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.059435 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:18:19.1755932 +0000 UTC Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.059497 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.069037 4763 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.636744 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" event={"ID":"04ba9988-7dc2-41c2-bebf-9f6308ecd013","Type":"ContainerStarted","Data":"15ac6022b7ad6aeb12b73c83dfd692e9afc1518b8f551b05985fb7ad56c6c7b7"} Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.658310 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.658282294 podStartE2EDuration="17.658282294s" podCreationTimestamp="2026-01-31 14:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:18.633458323 +0000 UTC m=+98.388196616" watchObservedRunningTime="2026-01-31 14:56:19.658282294 +0000 UTC m=+99.413020627" Jan 31 14:56:19 crc kubenswrapper[4763]: I0131 14:56:19.659521 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wfx69" podStartSLOduration=79.659513149 podStartE2EDuration="1m19.659513149s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:19.658064758 +0000 UTC m=+99.412803091" watchObservedRunningTime="2026-01-31 14:56:19.659513149 +0000 UTC m=+99.414251482" Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.041381 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.041434 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.041502 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.043456 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:21 crc kubenswrapper[4763]: I0131 14:56:21.043518 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.043690 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.043935 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:21 crc kubenswrapper[4763]: E0131 14:56:21.044082 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.041749 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.041765 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.041944 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.041979 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.042102 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.042183 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:23 crc kubenswrapper[4763]: I0131 14:56:23.042628 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:23 crc kubenswrapper[4763]: E0131 14:56:23.042756 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:24 crc kubenswrapper[4763]: I0131 14:56:24.042160 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:56:24 crc kubenswrapper[4763]: E0131 14:56:24.042418 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtknf_openshift-ovn-kubernetes(047ce610-09fa-482b-8d29-45ad376d12b3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041447 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041524 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041558 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041622 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:25 crc kubenswrapper[4763]: I0131 14:56:25.041732 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041738 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041834 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:25 crc kubenswrapper[4763]: E0131 14:56:25.041908 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.040798 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.040886 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.040915 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.041058 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.041126 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.041298 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:27 crc kubenswrapper[4763]: I0131 14:56:27.042220 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:27 crc kubenswrapper[4763]: E0131 14:56:27.042520 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.041073 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.041146 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.041346 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.041531 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.041783 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.041975 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:29 crc kubenswrapper[4763]: I0131 14:56:29.042006 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:29 crc kubenswrapper[4763]: E0131 14:56:29.042192 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041581 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041719 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041725 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:31 crc kubenswrapper[4763]: I0131 14:56:31.041909 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043125 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043341 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043480 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:31 crc kubenswrapper[4763]: E0131 14:56:31.043586 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041456 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041526 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041599 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.041600 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:33 crc kubenswrapper[4763]: I0131 14:56:33.041671 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.041788 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.041972 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:33 crc kubenswrapper[4763]: E0131 14:56:33.042113 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.695721 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696371 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/0.log" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696413 4763 generic.go:334] "Generic (PLEG): container finished" podID="2335d04f-10b2-4cf8-aae6-236650539c74" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44" exitCode=1 Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696447 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerDied","Data":"ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44"} Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696488 4763 scope.go:117] "RemoveContainer" containerID="e03cc15c7dbdaad2939af00cc7f2e9c819db86c4311a7b74d6b7948de89af947" Jan 31 14:56:34 crc kubenswrapper[4763]: I0131 14:56:34.696946 4763 scope.go:117] "RemoveContainer" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44" Jan 31 14:56:34 crc kubenswrapper[4763]: E0131 14:56:34.697122 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qzkhg_openshift-multus(2335d04f-10b2-4cf8-aae6-236650539c74)\"" pod="openshift-multus/multus-qzkhg" podUID="2335d04f-10b2-4cf8-aae6-236650539c74" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.041814 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.041900 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.041972 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.042066 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.042261 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.042307 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.042375 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:35 crc kubenswrapper[4763]: E0131 14:56:35.042446 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:35 crc kubenswrapper[4763]: I0131 14:56:35.701554 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041364 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041549 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.041604 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041656 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:37 crc kubenswrapper[4763]: I0131 14:56:37.041798 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.041841 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.042071 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:37 crc kubenswrapper[4763]: E0131 14:56:37.042246 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.043075 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.713907 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.716823 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerStarted","Data":"0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9"} Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.717252 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.751877 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podStartSLOduration=98.751846873 podStartE2EDuration="1m38.751846873s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:56:38.748713984 +0000 UTC m=+118.503452297" watchObservedRunningTime="2026-01-31 14:56:38.751846873 +0000 UTC m=+118.506585216" Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.839159 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26pm5"] Jan 31 14:56:38 crc kubenswrapper[4763]: I0131 14:56:38.839318 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:38 crc kubenswrapper[4763]: E0131 14:56:38.839447 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:39 crc kubenswrapper[4763]: I0131 14:56:39.041860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:39 crc kubenswrapper[4763]: E0131 14:56:39.041985 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:39 crc kubenswrapper[4763]: I0131 14:56:39.042260 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:39 crc kubenswrapper[4763]: E0131 14:56:39.042322 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:39 crc kubenswrapper[4763]: I0131 14:56:39.042759 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:39 crc kubenswrapper[4763]: E0131 14:56:39.044515 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.041342 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.041494 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.043676 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.043664 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:41 crc kubenswrapper[4763]: I0131 14:56:41.043832 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.043955 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.044022 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.044167 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.079040 4763 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 14:56:41 crc kubenswrapper[4763]: E0131 14:56:41.158035 4763 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.040964 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.041039 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041526 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.041123 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041650 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:43 crc kubenswrapper[4763]: I0131 14:56:43.041050 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041819 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:43 crc kubenswrapper[4763]: E0131 14:56:43.041997 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041078 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041139 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041288 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041340 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:45 crc kubenswrapper[4763]: I0131 14:56:45.041346 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041479 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041612 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:45 crc kubenswrapper[4763]: E0131 14:56:45.041658 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:46 crc kubenswrapper[4763]: E0131 14:56:46.159136 4763 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041150 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041348 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041272 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:47 crc kubenswrapper[4763]: I0131 14:56:47.041272 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041446 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041613 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041826 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:47 crc kubenswrapper[4763]: E0131 14:56:47.041967 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041083 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041182 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041308 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041128 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.041615 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041811 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041926 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:49 crc kubenswrapper[4763]: E0131 14:56:49.041992 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.042011 4763 scope.go:117] "RemoveContainer" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.762021 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log" Jan 31 14:56:49 crc kubenswrapper[4763]: I0131 14:56:49.762328 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6"} Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.040904 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.040946 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.040994 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:51 crc kubenswrapper[4763]: I0131 14:56:51.041050 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042251 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042433 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042613 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:56:51 crc kubenswrapper[4763]: E0131 14:56:51.042747 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-26pm5" podUID="84302428-88e1-47ba-84cc-7d12472f9aa2" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041397 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041501 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041538 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.041741 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.044327 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.044680 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045110 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045186 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045317 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 14:56:53 crc kubenswrapper[4763]: I0131 14:56:53.045530 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 14:56:57 crc kubenswrapper[4763]: I0131 14:56:57.013907 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.114088 4763 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.157928 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.158443 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.160850 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.161429 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.165097 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.165372 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.166284 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.166773 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.187197 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.188854 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9lvgt"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.189542 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.194763 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195256 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwc2g"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195275 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195379 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195483 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195604 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195660 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195790 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195858 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195909 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.195991 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.197604 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200752 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tv9s8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.201674 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200766 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200874 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200935 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.202223 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.202165 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200970 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.200989 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.201025 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.205658 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.206159 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.207601 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.208318 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209185 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209219 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209236 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209260 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209276 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: 
\"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209343 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.209658 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.213840 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjbd9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.214170 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.214436 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.214790 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.215272 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.215521 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.220894 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jj6qz"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.221396 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjjvp"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.221687 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bh727"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.230679 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.235902 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.260117 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.260509 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-87f9c"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261020 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261068 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261297 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.261642 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.262343 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.262576 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.263093 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.263314 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.263782 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.265270 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.265409 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.265847 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.266174 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzj54"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.266458 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.269754 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.270262 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.270826 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.272990 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.273056 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.273238 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.273312 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.279972 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.280770 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.281086 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.281327 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.282662 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-852vg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.283182 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.286746 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.288552 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.289831 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.290867 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.291750 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.292993 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.293611 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.296712 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.297320 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.297942 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.298246 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.301277 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.303004 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.303521 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.303668 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.304238 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.304720 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.307852 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.308607 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.308959 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gxcjc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.309496 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.309717 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.309845 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312185 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4b2d3-8915-480e-abf5-3b3e0184f778-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312272 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbl5\" (UniqueName: \"kubernetes.io/projected/bb78095b-d026-498f-9616-d8365161f809-kube-api-access-tbbl5\") pod \"migrator-59844c95c7-qpplg\" (UID: \"bb78095b-d026-498f-9616-d8365161f809\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312318 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfbv\" (UniqueName: \"kubernetes.io/projected/65c4b2d3-8915-480e-abf5-3b3e0184f778-kube-api-access-qqfbv\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312353 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7b2p\" (UniqueName: \"kubernetes.io/projected/954567bc-27c1-40c6-8fa3-8f653f90c199-kube-api-access-p7b2p\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312373 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-auth-proxy-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312392 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpdd\" (UniqueName: 
\"kubernetes.io/projected/ed007985-f681-4a45-a71a-ba27798fa94d-kube-api-access-cmpdd\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312411 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhh8\" (UniqueName: \"kubernetes.io/projected/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-kube-api-access-flhh8\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312435 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312458 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csx57\" (UniqueName: \"kubernetes.io/projected/156e6a74-f3a0-4ae0-8233-36da8946b7d6-kube-api-access-csx57\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312482 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9n7\" (UniqueName: \"kubernetes.io/projected/96ce635a-c905-4317-9f6d-64e1437d95c2-kube-api-access-ft9n7\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312508 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-console-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312530 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-images\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312552 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312573 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-oauth-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312610 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312630 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312651 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8f5\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-kube-api-access-8s8f5\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312672 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed007985-f681-4a45-a71a-ba27798fa94d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312720 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/156e6a74-f3a0-4ae0-8233-36da8946b7d6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312803 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lvgt"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.312855 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.313577 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.314091 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.314910 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.315095 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.315301 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.315439 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316417 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316580 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316611 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316784 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316837 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316920 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.316968 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317036 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317093 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317136 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317404 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.317563 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.319563 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jj6qz"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320057 4763 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320208 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320310 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320404 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320507 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320601 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320717 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320815 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.320903 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321023 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321465 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321658 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321780 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321871 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.321976 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322086 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322148 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 
14:56:59.322185 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322251 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322260 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.322923 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w9cb6"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.341903 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342155 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/96ce635a-c905-4317-9f6d-64e1437d95c2-machine-approver-tls\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342193 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342222 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342242 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342268 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/156e6a74-f3a0-4ae0-8233-36da8946b7d6-proxy-tls\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342289 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c4b2d3-8915-480e-abf5-3b3e0184f778-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342307 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-metrics-tls\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-oauth-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342371 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-trusted-ca-bundle\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342389 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hlp\" (UniqueName: \"kubernetes.io/projected/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-kube-api-access-89hlp\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342430 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-config\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342462 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-service-ca\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342573 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.342773 
4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343100 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krq7\" (UniqueName: \"kubernetes.io/projected/db0aea6c-f6f8-4548-905b-22d810b334d4-kube-api-access-9krq7\") pod \"downloads-7954f5f757-bh727\" (UID: \"db0aea6c-f6f8-4548-905b-22d810b334d4\") " pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343131 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343167 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954567bc-27c1-40c6-8fa3-8f653f90c199-serving-cert\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343188 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343209 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7plc\" (UniqueName: \"kubernetes.io/projected/0cddc243-3a83-4398-87a9-7a111581bec5-kube-api-access-v7plc\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343228 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-config\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.343434 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwc2g"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.344250 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.346263 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfwr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.352071 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.360430 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bh727"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.360523 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.360802 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361088 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361200 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361204 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361479 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.361605 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362086 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362168 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362402 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362567 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjjvp"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.362740 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363353 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363439 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363468 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363514 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363532 4763 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363580 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363614 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363649 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.363685 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.364905 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.365076 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.365249 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.365395 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.366284 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.366373 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.366560 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.369505 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.369736 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.369857 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373144 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373334 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373477 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.373580 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.374238 4763 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.374428 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.374798 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.376149 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.376956 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.378186 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.382460 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.385213 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.387904 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.388927 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.389440 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.389586 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.390651 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.392549 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.393970 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.395984 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.397095 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.397245 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.398287 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:56:59 crc 
kubenswrapper[4763]: I0131 14:56:59.400802 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzj54"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.400841 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.400851 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.402183 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.403262 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.403820 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.424165 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.424273 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tv9s8"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.427417 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.428836 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l8kn4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.429615 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.429732 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.431851 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.433947 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.435369 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.437656 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.440064 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gxcjc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.441938 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.442759 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444192 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444502 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-auth-proxy-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444535 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmpdd\" (UniqueName: \"kubernetes.io/projected/ed007985-f681-4a45-a71a-ba27798fa94d-kube-api-access-cmpdd\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444558 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flhh8\" (UniqueName: \"kubernetes.io/projected/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-kube-api-access-flhh8\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444583 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csx57\" (UniqueName: \"kubernetes.io/projected/156e6a74-f3a0-4ae0-8233-36da8946b7d6-kube-api-access-csx57\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444605 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9n7\" (UniqueName: \"kubernetes.io/projected/96ce635a-c905-4317-9f6d-64e1437d95c2-kube-api-access-ft9n7\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444629 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-console-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444652 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-images\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-oauth-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444712 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444735 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444754 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444779 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8f5\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-kube-api-access-8s8f5\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444803 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed007985-f681-4a45-a71a-ba27798fa94d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: 
\"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444827 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/156e6a74-f3a0-4ae0-8233-36da8946b7d6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444851 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/96ce635a-c905-4317-9f6d-64e1437d95c2-machine-approver-tls\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444873 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444894 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444915 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444937 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/156e6a74-f3a0-4ae0-8233-36da8946b7d6-proxy-tls\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444958 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c4b2d3-8915-480e-abf5-3b3e0184f778-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.444980 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445000 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-metrics-tls\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445022 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-oauth-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445043 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-trusted-ca-bundle\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445066 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hlp\" (UniqueName: \"kubernetes.io/projected/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-kube-api-access-89hlp\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445094 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-config\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445134 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-service-ca\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445163 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krq7\" (UniqueName: \"kubernetes.io/projected/db0aea6c-f6f8-4548-905b-22d810b334d4-kube-api-access-9krq7\") pod \"downloads-7954f5f757-bh727\" (UID: \"db0aea6c-f6f8-4548-905b-22d810b334d4\") " pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445184 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954567bc-27c1-40c6-8fa3-8f653f90c199-serving-cert\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445203 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-config\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445226 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7plc\" (UniqueName: \"kubernetes.io/projected/0cddc243-3a83-4398-87a9-7a111581bec5-kube-api-access-v7plc\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445250 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4b2d3-8915-480e-abf5-3b3e0184f778-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbl5\" (UniqueName: \"kubernetes.io/projected/bb78095b-d026-498f-9616-d8365161f809-kube-api-access-tbbl5\") pod \"migrator-59844c95c7-qpplg\" (UID: \"bb78095b-d026-498f-9616-d8365161f809\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445363 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445422 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqfbv\" (UniqueName: \"kubernetes.io/projected/65c4b2d3-8915-480e-abf5-3b3e0184f778-kube-api-access-qqfbv\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.445455 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7b2p\" (UniqueName: \"kubernetes.io/projected/954567bc-27c1-40c6-8fa3-8f653f90c199-kube-api-access-p7b2p\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.446507 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-auth-proxy-config\") pod \"machine-approver-56656f9798-zd45q\" 
(UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.446822 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96ce635a-c905-4317-9f6d-64e1437d95c2-config\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447301 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-console-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447813 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-852vg"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447844 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjbd9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447853 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l8kn4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.447974 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-images\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.448054 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-oauth-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.448667 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-config\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.448829 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-trusted-ca-bundle\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.449209 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0cddc243-3a83-4398-87a9-7a111581bec5-service-ca\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.449269 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450123 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450342 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450634 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/954567bc-27c1-40c6-8fa3-8f653f90c199-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450801 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-config\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.450968 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-oauth-config\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451205 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-metrics-tls\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451247 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451309 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/156e6a74-f3a0-4ae0-8233-36da8946b7d6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.451316 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/96ce635a-c905-4317-9f6d-64e1437d95c2-machine-approver-tls\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.452215 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.452504 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/954567bc-27c1-40c6-8fa3-8f653f90c199-serving-cert\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.452746 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed007985-f681-4a45-a71a-ba27798fa94d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.453561 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cddc243-3a83-4398-87a9-7a111581bec5-console-serving-cert\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.453653 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.456084 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfwr"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.457347 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.458222 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.459408 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5q274"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.460110 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.460375 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5q274"] Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.463620 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.480883 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.500640 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.520957 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.540867 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.549193 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-metrics-tls\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.568546 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.570991 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-trusted-ca\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.580294 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.600676 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.620413 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.641797 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.661131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.681203 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.701776 4763 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.720533 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.730545 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65c4b2d3-8915-480e-abf5-3b3e0184f778-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.741171 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.751339 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65c4b2d3-8915-480e-abf5-3b3e0184f778-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.761091 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.781688 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.805444 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.821093 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.842188 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.861144 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.881949 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.901672 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.922278 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.941284 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 14:56:59 crc kubenswrapper[4763]: I0131 14:56:59.961004 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 14:56:59 crc kubenswrapper[4763]: 
I0131 14:56:59.982435 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.001178 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.020274 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.040981 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.061513 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.102133 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.120813 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.130955 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/156e6a74-f3a0-4ae0-8233-36da8946b7d6-proxy-tls\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.141318 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.162004 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.181557 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.201067 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.221858 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.241389 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.270321 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.281192 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.299778 4763 request.go:700] Waited for 1.00956254s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.301582 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.321750 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.341092 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.367096 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.381003 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.401457 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.421513 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.441275 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.460817 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.482148 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.501185 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.520566 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.541202 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.561201 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.580950 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.602063 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.621866 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 
14:57:00.641396 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.662063 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.681954 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.702556 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.721952 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.742127 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.762462 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.782339 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.801062 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.820683 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.841246 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.862006 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.882054 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.900561 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.921042 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.942066 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.961355 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 14:57:00 crc kubenswrapper[4763]: I0131 14:57:00.982042 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.021789 
4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.031233 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"controller-manager-879f6c89f-bpxtg\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.042320 4763 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.061194 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.101368 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.121159 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.141961 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.192570 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7b2p\" (UniqueName: \"kubernetes.io/projected/954567bc-27c1-40c6-8fa3-8f653f90c199-kube-api-access-p7b2p\") pod \"authentication-operator-69f744f599-wjjvp\" (UID: \"954567bc-27c1-40c6-8fa3-8f653f90c199\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.210302 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.228968 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmpdd\" (UniqueName: \"kubernetes.io/projected/ed007985-f681-4a45-a71a-ba27798fa94d-kube-api-access-cmpdd\") pod \"cluster-samples-operator-665b6dd947-75x7z\" (UID: \"ed007985-f681-4a45-a71a-ba27798fa94d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.248714 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flhh8\" (UniqueName: \"kubernetes.io/projected/7540b5d1-cac8-4c3d-88f1-cd961bf8bd47-kube-api-access-flhh8\") pod \"dns-operator-744455d44c-jj6qz\" (UID: \"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47\") " pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.249427 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.261447 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csx57\" (UniqueName: \"kubernetes.io/projected/156e6a74-f3a0-4ae0-8233-36da8946b7d6-kube-api-access-csx57\") pod \"machine-config-controller-84d6567774-zznr9\" (UID: \"156e6a74-f3a0-4ae0-8233-36da8946b7d6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.282850 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9n7\" (UniqueName: \"kubernetes.io/projected/96ce635a-c905-4317-9f6d-64e1437d95c2-kube-api-access-ft9n7\") pod \"machine-approver-56656f9798-zd45q\" (UID: \"96ce635a-c905-4317-9f6d-64e1437d95c2\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.285791 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.296710 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hlp\" (UniqueName: \"kubernetes.io/projected/5d9ac26c-eb66-4772-b7ee-a6b646092c4b-kube-api-access-89hlp\") pod \"machine-api-operator-5694c8668f-bwc2g\" (UID: \"5d9ac26c-eb66-4772-b7ee-a6b646092c4b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.319266 4763 request.go:700] Waited for 1.868969302s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.320801 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krq7\" (UniqueName: \"kubernetes.io/projected/db0aea6c-f6f8-4548-905b-22d810b334d4-kube-api-access-9krq7\") pod \"downloads-7954f5f757-bh727\" (UID: \"db0aea6c-f6f8-4548-905b-22d810b334d4\") " pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.350124 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7plc\" (UniqueName: \"kubernetes.io/projected/0cddc243-3a83-4398-87a9-7a111581bec5-kube-api-access-v7plc\") pod \"console-f9d7485db-9lvgt\" (UID: \"0cddc243-3a83-4398-87a9-7a111581bec5\") " pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.356426 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.366345 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqfbv\" (UniqueName: \"kubernetes.io/projected/65c4b2d3-8915-480e-abf5-3b3e0184f778-kube-api-access-qqfbv\") pod \"openshift-controller-manager-operator-756b6f6bc6-2jtkm\" (UID: \"65c4b2d3-8915-480e-abf5-3b3e0184f778\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.379668 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.389251 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8f5\" (UniqueName: \"kubernetes.io/projected/4c48bb3f-235a-4dcd-ba1a-62f85f8946ac-kube-api-access-8s8f5\") pod \"ingress-operator-5b745b69d9-fkt75\" (UID: \"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.389352 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.400381 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbl5\" (UniqueName: \"kubernetes.io/projected/bb78095b-d026-498f-9616-d8365161f809-kube-api-access-tbbl5\") pod \"migrator-59844c95c7-qpplg\" (UID: \"bb78095b-d026-498f-9616-d8365161f809\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.401107 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.421465 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.441847 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.460795 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.471025 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.500890 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjjvp"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.501458 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" Jan 31 14:57:01 crc kubenswrapper[4763]: W0131 14:57:01.532849 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954567bc_27c1_40c6_8fa3_8f653f90c199.slice/crio-da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205 WatchSource:0}: Error finding container da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205: Status 404 returned error can't find the container with id da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205 Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.540200 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.547813 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571447 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-serving-cert\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571478 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44m57\" (UniqueName: \"kubernetes.io/projected/ac92922f-89ed-41e7-bf6f-9750efc9cab0-kube-api-access-44m57\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571510 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cdca6f-11b2-4888-9a4c-4b06a94d1863-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571525 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-client\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571549 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f56211-548f-4d20-9c0a-70108a8f557b-config\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571563 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-images\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571577 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571592 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj96k\" (UniqueName: 
\"kubernetes.io/projected/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-kube-api-access-mj96k\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571607 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571622 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6kd\" (UniqueName: \"kubernetes.io/projected/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-kube-api-access-8p6kd\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571636 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-serving-cert\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571652 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8edb2bab-1e72-4b68-afed-2de0572a1071-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571666 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-stats-auth\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571679 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-audit-dir\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571708 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctc8x\" (UniqueName: \"kubernetes.io/projected/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-kube-api-access-ctc8x\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571723 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571739 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571755 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-policies\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571770 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqmrv\" (UniqueName: \"kubernetes.io/projected/31cdca6f-11b2-4888-9a4c-4b06a94d1863-kube-api-access-xqmrv\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571805 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-dir\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571819 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/229488e3-89a8-4eb4-841e-980db3f8cfb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571832 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-client\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571845 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571859 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571875 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-serving-cert\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571890 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571907 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571920 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-encryption-config\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571934 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-audit\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571951 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571965 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571982 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.571998 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-trusted-ca\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572015 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572029 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572042 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572059 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-service-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572072 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-client\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572087 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 
14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572102 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572136 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572149 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f56211-548f-4d20-9c0a-70108a8f557b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572164 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-image-import-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572177 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31cdca6f-11b2-4888-9a4c-4b06a94d1863-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572201 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-encryption-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572215 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-service-ca-bundle\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572229 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: 
I0131 14:57:01.572253 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-config\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572274 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-metrics-certs\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572288 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572303 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22ff872b-afc9-4fa7-812b-f47bb3add27c-proxy-tls\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572318 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmj5\" (UniqueName: \"kubernetes.io/projected/22ff872b-afc9-4fa7-812b-f47bb3add27c-kube-api-access-bkmj5\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572335 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572366 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 
14:57:01.572380 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8edb2bab-1e72-4b68-afed-2de0572a1071-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572395 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/229488e3-89a8-4eb4-841e-980db3f8cfb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572420 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572445 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572459 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edb2bab-1e72-4b68-afed-2de0572a1071-config\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572485 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572500 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572514 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6f56211-548f-4d20-9c0a-70108a8f557b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572542 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572556 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572572 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572588 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-serving-cert\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572604 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572621 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572635 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-default-certificate\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") 
" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572650 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjb6g\" (UniqueName: \"kubernetes.io/projected/229488e3-89a8-4eb4-841e-980db3f8cfb3-kube-api-access-cjb6g\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-config\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572685 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.572716 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrjn\" (UniqueName: \"kubernetes.io/projected/330d3fd9-790f-406d-a122-152a1ab07e5c-kube-api-access-mgrjn\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.573276 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.07325966 +0000 UTC m=+141.827997953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.592193 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.603175 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.607920 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.622672 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675319 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675584 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675655 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675688 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzb2\" (UniqueName: \"kubernetes.io/projected/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-kube-api-access-krzb2\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675734 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675757 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbsgk\" (UniqueName: \"kubernetes.io/projected/92ef1804-52cd-46a1-86e1-baf561981f8b-kube-api-access-zbsgk\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.675787 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-service-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.676365 4763 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bwc2g"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.677067 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.677141 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.177127773 +0000 UTC m=+141.931866066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-client\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678105 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678141 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678164 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678187 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f313ecec-c631-4270-a297-51e482e3e306-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc 
kubenswrapper[4763]: I0131 14:57:01.678217 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678241 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dc5ca38-64fe-41f8-a989-0b035bf29414-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678268 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678293 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7826828-7856-44a4-be9f-f1a939950c3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678319 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31cdca6f-11b2-4888-9a4c-4b06a94d1863-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678343 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f56211-548f-4d20-9c0a-70108a8f557b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678372 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-image-import-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678411 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678436 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92ef1804-52cd-46a1-86e1-baf561981f8b-cert\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678464 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtqq\" (UniqueName: \"kubernetes.io/projected/80834dac-7e21-4dda-8f32-3a19eced5753-kube-api-access-vvtqq\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678490 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-encryption-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678512 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678539 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-service-ca-bundle\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678563 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678638 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-config\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d8117-c3e5-498f-8458-e72238d0f0ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 
14:57:01.678711 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-certs\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678754 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678779 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-metrics-certs\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678800 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22ff872b-afc9-4fa7-812b-f47bb3add27c-proxy-tls\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678824 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmj5\" (UniqueName: \"kubernetes.io/projected/22ff872b-afc9-4fa7-812b-f47bb3add27c-kube-api-access-bkmj5\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678849 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678876 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678901 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-srv-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678954 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.678976 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-apiservice-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8edb2bab-1e72-4b68-afed-2de0572a1071-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679063 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/229488e3-89a8-4eb4-841e-980db3f8cfb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679085 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6758143-5085-416e-9bdc-856a520c71de-signing-key\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679108 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj8t\" (UniqueName: \"kubernetes.io/projected/e6758143-5085-416e-9bdc-856a520c71de-kube-api-access-swj8t\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679136 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679169 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-webhook-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679192 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c58528-2088-4902-ab32-10cd90be0562-config\") pod 
\"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679217 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679257 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edb2bab-1e72-4b68-afed-2de0572a1071-config\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679312 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679337 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679359 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679382 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc5ca38-64fe-41f8-a989-0b035bf29414-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679405 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjksc\" (UniqueName: \"kubernetes.io/projected/f313ecec-c631-4270-a297-51e482e3e306-kube-api-access-zjksc\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679442 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679467 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6f56211-548f-4d20-9c0a-70108a8f557b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679488 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679508 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-serving-cert\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679531 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rjv\" (UniqueName: \"kubernetes.io/projected/a766d0cf-2406-4406-aaec-51a9da3d6b55-kube-api-access-x2rjv\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679555 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679578 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-default-certificate\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679621 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjb6g\" (UniqueName: \"kubernetes.io/projected/229488e3-89a8-4eb4-841e-980db3f8cfb3-kube-api-access-cjb6g\") pod \"openshift-config-operator-7777fb866f-5zjq4\" 
(UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679644 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665h2\" (UniqueName: \"kubernetes.io/projected/84d11c6f-169b-4e21-87ec-8bb8930a1831-kube-api-access-665h2\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679740 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6jtr\" (UniqueName: \"kubernetes.io/projected/a7826828-7856-44a4-be9f-f1a939950c3e-kube-api-access-p6jtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679764 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-plugins-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679792 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f313ecec-c631-4270-a297-51e482e3e306-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679819 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-config\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679848 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-socket-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679887 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679910 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrjn\" (UniqueName: \"kubernetes.io/projected/330d3fd9-790f-406d-a122-152a1ab07e5c-kube-api-access-mgrjn\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679932 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-csi-data-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679957 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.679982 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-serving-cert\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680003 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44m57\" (UniqueName: \"kubernetes.io/projected/ac92922f-89ed-41e7-bf6f-9750efc9cab0-kube-api-access-44m57\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680026 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680084 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-metrics-tls\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680110 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cdca6f-11b2-4888-9a4c-4b06a94d1863-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680132 4763 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680151 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6758143-5085-416e-9bdc-856a520c71de-signing-cabundle\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680171 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-registration-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680208 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84d11c6f-169b-4e21-87ec-8bb8930a1831-tmpfs\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680231 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-client\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680265 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f56211-548f-4d20-9c0a-70108a8f557b-config\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680286 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-images\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680305 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj96k\" (UniqueName: \"kubernetes.io/projected/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-kube-api-access-mj96k\") pod \"console-operator-58897d9998-mjbd9\" 
(UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680350 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680372 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6kd\" (UniqueName: \"kubernetes.io/projected/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-kube-api-access-8p6kd\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680393 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-serving-cert\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680420 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8edb2bab-1e72-4b68-afed-2de0572a1071-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680442 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-stats-auth\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680461 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-audit-dir\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680482 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680502 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctc8x\" (UniqueName: \"kubernetes.io/projected/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-kube-api-access-ctc8x\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680522 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680545 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680570 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-policies\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680593 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc5ca38-64fe-41f8-a989-0b035bf29414-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680632 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680660 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqmrv\" (UniqueName: \"kubernetes.io/projected/31cdca6f-11b2-4888-9a4c-4b06a94d1863-kube-api-access-xqmrv\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680682 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.680766 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-dir\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.681860 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.683575 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-service-ca\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.683600 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31cdca6f-11b2-4888-9a4c-4b06a94d1863-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.684605 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.686889 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.687904 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688098 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-policies\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688201 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688661 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688786 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-client\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688844 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/229488e3-89a8-4eb4-841e-980db3f8cfb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688874 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688903 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688943 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-serving-cert\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688968 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.688993 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-encryption-config\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689018 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689166 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-service-ca-bundle\") pod \"router-default-5444994796-87f9c\" (UID: 
\"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689233 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-srv-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689268 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p9ph\" (UniqueName: \"kubernetes.io/projected/6f2d8117-c3e5-498f-8458-e72238d0f0ac-kube-api-access-8p9ph\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689297 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689324 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c58528-2088-4902-ab32-10cd90be0562-serving-cert\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689352 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-audit\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689394 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689419 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-node-bootstrap-token\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689442 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-config-volume\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 
14:57:01.689466 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbxj\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-kube-api-access-9rbxj\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689505 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80834dac-7e21-4dda-8f32-3a19eced5753-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689563 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg645\" (UniqueName: \"kubernetes.io/projected/09c58528-2088-4902-ab32-10cd90be0562-kube-api-access-gg645\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689584 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpsn\" (UniqueName: \"kubernetes.io/projected/d73a5142-56cf-4676-a6f1-a00868938c4d-kube-api-access-mgpsn\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689623 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689647 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqgf\" (UniqueName: \"kubernetes.io/projected/301a24de-a6b1-45a1-a12d-663325e45fd6-kube-api-access-wcqgf\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.689674 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-trusted-ca\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: 
I0131 14:57:01.689734 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp4j4\" (UniqueName: \"kubernetes.io/projected/e043d261-8774-411b-be6d-98dbb1f210a2-kube-api-access-hp4j4\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.690584 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-audit\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.692301 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-serving-cert\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.692509 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.693116 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.693134 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.694280 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.694760 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-serving-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.695396 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-config\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " 
pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.696964 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.698253 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-config\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699075 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699330 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699609 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699639 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac92922f-89ed-41e7-bf6f-9750efc9cab0-audit-dir\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.699928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8edb2bab-1e72-4b68-afed-2de0572a1071-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.700176 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22ff872b-afc9-4fa7-812b-f47bb3add27c-proxy-tls\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.700560 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-etcd-client\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701198 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-metrics-certs\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701407 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701589 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701810 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-encryption-config\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701847 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-node-pullsecrets\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.701971 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8edb2bab-1e72-4b68-afed-2de0572a1071-config\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.702638 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.703719 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31cdca6f-11b2-4888-9a4c-4b06a94d1863-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.704369 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.705643 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f56211-548f-4d20-9c0a-70108a8f557b-config\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.706105 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/330d3fd9-790f-406d-a122-152a1ab07e5c-audit-dir\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.706775 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.707151 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.20713732 +0000 UTC m=+141.961875613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.707307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.708119 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.708269 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/22ff872b-afc9-4fa7-812b-f47bb3add27c-images\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.709141 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-serving-cert\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.710284 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.710928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.710995 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.711971 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.712404 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/229488e3-89a8-4eb4-841e-980db3f8cfb3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.713215 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-trusted-ca\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.713598 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/330d3fd9-790f-406d-a122-152a1ab07e5c-image-import-ca\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.714816 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-serving-cert\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.716498 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.718728 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-default-certificate\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.718944 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.719793 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-etcd-client\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.719869 
4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f56211-548f-4d20-9c0a-70108a8f557b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.719977 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ac92922f-89ed-41e7-bf6f-9750efc9cab0-encryption-config\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.720250 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/330d3fd9-790f-406d-a122-152a1ab07e5c-etcd-client\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.724566 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-serving-cert\") pod \"console-operator-58897d9998-mjbd9\" (UID: \"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.737624 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/229488e3-89a8-4eb4-841e-980db3f8cfb3-serving-cert\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.740824 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.749827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-stats-auth\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.750113 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkmj5\" (UniqueName: \"kubernetes.io/projected/22ff872b-afc9-4fa7-812b-f47bb3add27c-kube-api-access-bkmj5\") pod \"machine-config-operator-74547568cd-rltbk\" (UID: \"22ff872b-afc9-4fa7-812b-f47bb3add27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.779507 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj96k\" (UniqueName: \"kubernetes.io/projected/d2c4bb39-a442-4316-81a9-d5e8f9d10eaa-kube-api-access-mj96k\") pod \"console-operator-58897d9998-mjbd9\" (UID: 
\"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa\") " pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-metrics-tls\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790651 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790687 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84d11c6f-169b-4e21-87ec-8bb8930a1831-tmpfs\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790715 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6758143-5085-416e-9bdc-856a520c71de-signing-cabundle\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790730 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-registration-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790761 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790784 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1dc5ca38-64fe-41f8-a989-0b035bf29414-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790805 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790826 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790848 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790877 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-srv-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790892 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p9ph\" (UniqueName: \"kubernetes.io/projected/6f2d8117-c3e5-498f-8458-e72238d0f0ac-kube-api-access-8p9ph\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790908 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c58528-2088-4902-ab32-10cd90be0562-serving-cert\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790927 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-node-bootstrap-token\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790941 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-config-volume\") pod \"dns-default-l8kn4\" (UID: 
\"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790957 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbxj\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-kube-api-access-9rbxj\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790972 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80834dac-7e21-4dda-8f32-3a19eced5753-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.790987 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg645\" (UniqueName: \"kubernetes.io/projected/09c58528-2088-4902-ab32-10cd90be0562-kube-api-access-gg645\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791002 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgpsn\" (UniqueName: \"kubernetes.io/projected/d73a5142-56cf-4676-a6f1-a00868938c4d-kube-api-access-mgpsn\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791017 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqgf\" (UniqueName: \"kubernetes.io/projected/301a24de-a6b1-45a1-a12d-663325e45fd6-kube-api-access-wcqgf\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791036 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp4j4\" (UniqueName: \"kubernetes.io/projected/e043d261-8774-411b-be6d-98dbb1f210a2-kube-api-access-hp4j4\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791054 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzb2\" (UniqueName: \"kubernetes.io/projected/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-kube-api-access-krzb2\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791068 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 
14:57:01.791085 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbsgk\" (UniqueName: \"kubernetes.io/projected/92ef1804-52cd-46a1-86e1-baf561981f8b-kube-api-access-zbsgk\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791101 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791117 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791133 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f313ecec-c631-4270-a297-51e482e3e306-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791149 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dc5ca38-64fe-41f8-a989-0b035bf29414-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791166 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7826828-7856-44a4-be9f-f1a939950c3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791185 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791200 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92ef1804-52cd-46a1-86e1-baf561981f8b-cert\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791217 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vvtqq\" (UniqueName: \"kubernetes.io/projected/80834dac-7e21-4dda-8f32-3a19eced5753-kube-api-access-vvtqq\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791235 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d8117-c3e5-498f-8458-e72238d0f0ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-certs\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791297 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-srv-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791314 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-apiservice-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791331 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6758143-5085-416e-9bdc-856a520c71de-signing-key\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791348 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj8t\" (UniqueName: \"kubernetes.io/projected/e6758143-5085-416e-9bdc-856a520c71de-kube-api-access-swj8t\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-webhook-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" 
(UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791395 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c58528-2088-4902-ab32-10cd90be0562-config\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.792557 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.292521295 +0000 UTC m=+142.047259588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.791442 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc5ca38-64fe-41f8-a989-0b035bf29414-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793519 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjksc\" (UniqueName: \"kubernetes.io/projected/f313ecec-c631-4270-a297-51e482e3e306-kube-api-access-zjksc\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793567 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rjv\" (UniqueName: \"kubernetes.io/projected/a766d0cf-2406-4406-aaec-51a9da3d6b55-kube-api-access-x2rjv\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793622 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665h2\" (UniqueName: \"kubernetes.io/projected/84d11c6f-169b-4e21-87ec-8bb8930a1831-kube-api-access-665h2\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793652 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6jtr\" (UniqueName: \"kubernetes.io/projected/a7826828-7856-44a4-be9f-f1a939950c3e-kube-api-access-p6jtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: 
\"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793686 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-plugins-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793762 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f313ecec-c631-4270-a297-51e482e3e306-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793790 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-socket-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793825 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-csi-data-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.793850 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.794346 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-plugins-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.795095 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.795498 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.795987 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84d11c6f-169b-4e21-87ec-8bb8930a1831-tmpfs\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.796748 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.796875 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f313ecec-c631-4270-a297-51e482e3e306-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797309 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09c58528-2088-4902-ab32-10cd90be0562-config\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797621 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-socket-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797724 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-csi-data-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797874 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6758143-5085-416e-9bdc-856a520c71de-signing-cabundle\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.797963 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-registration-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.798264 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-config-volume\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.799096 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d73a5142-56cf-4676-a6f1-a00868938c4d-mountpoint-dir\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.799381 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.799978 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc5ca38-64fe-41f8-a989-0b035bf29414-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.800628 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-apiservice-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.801413 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-srv-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803136 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803179 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e043d261-8774-411b-be6d-98dbb1f210a2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803308 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-node-bootstrap-token\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803350 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-profile-collector-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803630 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc5ca38-64fe-41f8-a989-0b035bf29414-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6758143-5085-416e-9bdc-856a520c71de-signing-key\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803837 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.803855 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-metrics-tls\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804477 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f313ecec-c631-4270-a297-51e482e3e306-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804834 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09c58528-2088-4902-ab32-10cd90be0562-serving-cert\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804878 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.804933 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/92ef1804-52cd-46a1-86e1-baf561981f8b-cert\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.805294 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301a24de-a6b1-45a1-a12d-663325e45fd6-srv-cert\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.806338 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a766d0cf-2406-4406-aaec-51a9da3d6b55-certs\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.806771 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f2d8117-c3e5-498f-8458-e72238d0f0ac-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.806938 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80834dac-7e21-4dda-8f32-3a19eced5753-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.807003 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84d11c6f-169b-4e21-87ec-8bb8930a1831-webhook-cert\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.811855 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.812090 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a7826828-7856-44a4-be9f-f1a939950c3e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.820551 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" event={"ID":"96ce635a-c905-4317-9f6d-64e1437d95c2","Type":"ContainerStarted","Data":"63cb66984f9c0a300f5da8745509ae18241a5d125ff6b08817258f5e539b6bf5"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.820608 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" event={"ID":"96ce635a-c905-4317-9f6d-64e1437d95c2","Type":"ContainerStarted","Data":"6c08a39d0c6c3c6b1f6ca8054506acc7fdd081a33e2bb00f17fe269e7c284842"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.820757 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod 
\"route-controller-manager-6576b87f9c-mpmpg\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.822732 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lvgt"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.824263 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerStarted","Data":"b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.824304 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerStarted","Data":"5825be7f5b0a2372a0714ff20d5e467974a43da635470237d607739086eb1094"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.824925 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.827024 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.827731 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.828064 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.835650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctc8x\" (UniqueName: \"kubernetes.io/projected/47ac991e-3a26-4da1-9cf0-6f0944a3bf7b-kube-api-access-ctc8x\") pod \"router-default-5444994796-87f9c\" (UID: \"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b\") " pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.838612 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" event={"ID":"954567bc-27c1-40c6-8fa3-8f653f90c199","Type":"ContainerStarted","Data":"0a0295e9478c783cc5c7eae8b9d1e576728bdcf28c9b61ba5e36bd581149c3aa"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.838645 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" event={"ID":"954567bc-27c1-40c6-8fa3-8f653f90c199","Type":"ContainerStarted","Data":"da9911c212d0087ba033bb74fab71ce73c49d040572cf055075f1e14bb37f205"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.841171 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" event={"ID":"5d9ac26c-eb66-4772-b7ee-a6b646092c4b","Type":"ContainerStarted","Data":"48eccc93a4902e5dbeedb0e4b6a546c340cb4fa2ad77c0a08618d843e1fa5198"} Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.859383 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqmrv\" (UniqueName: \"kubernetes.io/projected/31cdca6f-11b2-4888-9a4c-4b06a94d1863-kube-api-access-xqmrv\") pod \"openshift-apiserver-operator-796bbdcf4f-2bx5m\" (UID: \"31cdca6f-11b2-4888-9a4c-4b06a94d1863\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.880241 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"oauth-openshift-558db77b4-8pcvn\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.882394 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.893791 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrjn\" (UniqueName: \"kubernetes.io/projected/330d3fd9-790f-406d-a122-152a1ab07e5c-kube-api-access-mgrjn\") pod \"apiserver-76f77b778f-tv9s8\" (UID: \"330d3fd9-790f-406d-a122-152a1ab07e5c\") " pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.895102 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:01 crc kubenswrapper[4763]: E0131 14:57:01.896029 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.396012456 +0000 UTC m=+142.150750749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:01 crc kubenswrapper[4763]: W0131 14:57:01.916189 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47ac991e_3a26_4da1_9cf0_6f0944a3bf7b.slice/crio-7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894 WatchSource:0}: Error finding container 7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894: Status 404 returned error can't find the container with id 7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894 Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.918414 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6kd\" (UniqueName: \"kubernetes.io/projected/2435f2fb-db3e-4a5a-910d-b2bdebfff9ed-kube-api-access-8p6kd\") pod \"etcd-operator-b45778765-nzj54\" (UID: \"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed\") " pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.925996 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.931981 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.944327 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6f56211-548f-4d20-9c0a-70108a8f557b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-8gxz5\" (UID: \"f6f56211-548f-4d20-9c0a-70108a8f557b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.960889 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z"] Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.961027 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8edb2bab-1e72-4b68-afed-2de0572a1071-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-c2zm8\" (UID: \"8edb2bab-1e72-4b68-afed-2de0572a1071\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.976370 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjb6g\" (UniqueName: \"kubernetes.io/projected/229488e3-89a8-4eb4-841e-980db3f8cfb3-kube-api-access-cjb6g\") pod \"openshift-config-operator-7777fb866f-5zjq4\" (UID: \"229488e3-89a8-4eb4-841e-980db3f8cfb3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.995630 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44m57\" (UniqueName: \"kubernetes.io/projected/ac92922f-89ed-41e7-bf6f-9750efc9cab0-kube-api-access-44m57\") pod \"apiserver-7bbb656c7d-nwfnl\" (UID: \"ac92922f-89ed-41e7-bf6f-9750efc9cab0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.996816 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:01 crc kubenswrapper[4763]: I0131 14:57:01.997139 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.000624 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.500599652 +0000 UTC m=+142.255337965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.005875 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.044315 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqgf\" (UniqueName: \"kubernetes.io/projected/301a24de-a6b1-45a1-a12d-663325e45fd6-kube-api-access-wcqgf\") pod \"catalog-operator-68c6474976-4fhtj\" (UID: \"301a24de-a6b1-45a1-a12d-663325e45fd6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.048431 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.055607 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jj6qz"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.062401 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp4j4\" (UniqueName: \"kubernetes.io/projected/e043d261-8774-411b-be6d-98dbb1f210a2-kube-api-access-hp4j4\") pod \"olm-operator-6b444d44fb-jtt2l\" (UID: \"e043d261-8774-411b-be6d-98dbb1f210a2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.064830 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mjbd9"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.078572 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzb2\" (UniqueName: \"kubernetes.io/projected/1113d5ad-40c9-412f-92c2-2fb0d6ec2903-kube-api-access-krzb2\") pod \"dns-default-l8kn4\" (UID: \"1113d5ad-40c9-412f-92c2-2fb0d6ec2903\") " pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.081543 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.099311 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.099818 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.599789959 +0000 UTC m=+142.354528252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.104385 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.105265 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgpsn\" (UniqueName: \"kubernetes.io/projected/d73a5142-56cf-4676-a6f1-a00868938c4d-kube-api-access-mgpsn\") pod \"csi-hostpathplugin-5kfwr\" (UID: \"d73a5142-56cf-4676-a6f1-a00868938c4d\") " pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.126922 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.130555 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"collect-profiles-29497845-zn989\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.135173 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.136152 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.137591 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtqq\" (UniqueName: \"kubernetes.io/projected/80834dac-7e21-4dda-8f32-3a19eced5753-kube-api-access-vvtqq\") pod \"multus-admission-controller-857f4d67dd-852vg\" (UID: \"80834dac-7e21-4dda-8f32-3a19eced5753\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.149399 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.149446 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bh727"] Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.165071 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c48bb3f_235a_4dcd_ba1a_62f85f8946ac.slice/crio-80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d WatchSource:0}: Error finding container 80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d: Status 404 returned error can't find the container with id 80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.178076 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.180612 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbsgk\" (UniqueName: \"kubernetes.io/projected/92ef1804-52cd-46a1-86e1-baf561981f8b-kube-api-access-zbsgk\") pod \"ingress-canary-5q274\" (UID: \"92ef1804-52cd-46a1-86e1-baf561981f8b\") " pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.203284 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.203546 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.703489236 +0000 UTC m=+142.458227529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.204055 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.204219 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjksc\" (UniqueName: \"kubernetes.io/projected/f313ecec-c631-4270-a297-51e482e3e306-kube-api-access-zjksc\") pod \"kube-storage-version-migrator-operator-b67b599dd-z8wss\" (UID: \"f313ecec-c631-4270-a297-51e482e3e306\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.205268 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.705256441 +0000 UTC m=+142.459994824 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.214505 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.221136 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk"] Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.228667 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb78095b_d026_498f_9616_d8365161f809.slice/crio-15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152 WatchSource:0}: Error finding container 15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152: Status 404 returned error can't find the container with id 15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152 Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.232319 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"marketplace-operator-79b997595-flcgf\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.232759 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65c4b2d3_8915_480e_abf5_3b3e0184f778.slice/crio-d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a WatchSource:0}: Error finding container d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a: Status 404 returned error can't find the container with id d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.236304 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj8t\" (UniqueName: \"kubernetes.io/projected/e6758143-5085-416e-9bdc-856a520c71de-kube-api-access-swj8t\") pod \"service-ca-9c57cc56f-gxcjc\" (UID: \"e6758143-5085-416e-9bdc-856a520c71de\") " pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.242630 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.271950 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.272259 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-nzj54"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.272866 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rjv\" (UniqueName: \"kubernetes.io/projected/a766d0cf-2406-4406-aaec-51a9da3d6b55-kube-api-access-x2rjv\") pod \"machine-config-server-w9cb6\" (UID: \"a766d0cf-2406-4406-aaec-51a9da3d6b55\") " pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.275891 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.284536 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665h2\" (UniqueName: \"kubernetes.io/projected/84d11c6f-169b-4e21-87ec-8bb8930a1831-kube-api-access-665h2\") pod \"packageserver-d55dfcdfc-f7fgc\" (UID: \"84d11c6f-169b-4e21-87ec-8bb8930a1831\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.288202 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ff872b_afc9_4fa7_812b_f47bb3add27c.slice/crio-2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e WatchSource:0}: Error finding container 2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e: Status 404 returned error can't find the container with id 2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.295322 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.296468 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6jtr\" (UniqueName: \"kubernetes.io/projected/a7826828-7856-44a4-be9f-f1a939950c3e-kube-api-access-p6jtr\") pod \"control-plane-machine-set-operator-78cbb6b69f-dncp9\" (UID: \"a7826828-7856-44a4-be9f-f1a939950c3e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.302079 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.306329 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.306630 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.306957 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.806941097 +0000 UTC m=+142.561679390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.312275 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.314318 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1dc5ca38-64fe-41f8-a989-0b035bf29414-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-j7kx4\" (UID: \"1dc5ca38-64fe-41f8-a989-0b035bf29414\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.319457 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.325858 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.331717 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.337123 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rbxj\" (UniqueName: \"kubernetes.io/projected/74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba-kube-api-access-9rbxj\") pod \"cluster-image-registry-operator-dc59b4c8b-9zjqh\" (UID: \"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.337676 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.346292 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9cb6" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.363533 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.364181 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg645\" (UniqueName: \"kubernetes.io/projected/09c58528-2088-4902-ab32-10cd90be0562-kube-api-access-gg645\") pod \"service-ca-operator-777779d784-g5s5p\" (UID: \"09c58528-2088-4902-ab32-10cd90be0562\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.369882 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.375397 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5q274" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.377442 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p9ph\" (UniqueName: \"kubernetes.io/projected/6f2d8117-c3e5-498f-8458-e72238d0f0ac-kube-api-access-8p9ph\") pod \"package-server-manager-789f6589d5-l8wxr\" (UID: \"6f2d8117-c3e5-498f-8458-e72238d0f0ac\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.408823 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.409127 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:02.909114257 +0000 UTC m=+142.663852550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.510313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.511215 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.011199784 +0000 UTC m=+142.765938077 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.544872 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.554967 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.582418 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.586870 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tv9s8"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.588275 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.588441 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.603628 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.613670 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.614009 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.113994203 +0000 UTC m=+142.868732496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.634849 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod229488e3_89a8_4eb4_841e_980db3f8cfb3.slice/crio-ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219 WatchSource:0}: Error finding container ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219: Status 404 returned error can't find the container with id ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219 Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.675060 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.706295 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.712070 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.713578 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.714263 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.715368 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.215346348 +0000 UTC m=+142.970084641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.817718 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.818265 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.31822746 +0000 UTC m=+143.072965753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: W0131 14:57:02.823606 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod275ea46d_7a78_4457_a5ba_7b3000170d0e.slice/crio-4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7 WatchSource:0}: Error finding container 4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7: Status 404 returned error can't find the container with id 4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7 Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.905953 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerStarted","Data":"4ae66d1e24ee960fa9743e61e520b26c4e663054f89f7c05c1946d2e50245607"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.914150 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" event={"ID":"156e6a74-f3a0-4ae0-8233-36da8946b7d6","Type":"ContainerStarted","Data":"cec37e3b98d59d901f64960a1b70127637b77fe0e693177f08873e4ace80a3a3"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.914238 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" event={"ID":"156e6a74-f3a0-4ae0-8233-36da8946b7d6","Type":"ContainerStarted","Data":"accb3b8144675dcb50127d1e274307b54a06be0debf1f047758f4b388282996b"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.914249 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" 
event={"ID":"156e6a74-f3a0-4ae0-8233-36da8946b7d6","Type":"ContainerStarted","Data":"4c657c0d5a8e347fa5049810271a5a9b811e2b7f7f8f68e2e0afbf29b28fa299"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.918407 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerStarted","Data":"7b560d0bfea717d413c6f7f997b55322350cc5d7f6770006679df4dc57bee56a"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.920873 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:02 crc kubenswrapper[4763]: E0131 14:57:02.921223 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.421210615 +0000 UTC m=+143.175948908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.923680 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-87f9c" event={"ID":"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b","Type":"ContainerStarted","Data":"21097b2e02072a318fcd7dfca12110d4bcc198574a952da3787017a8def86308"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.923721 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-87f9c" event={"ID":"47ac991e-3a26-4da1-9cf0-6f0944a3bf7b","Type":"ContainerStarted","Data":"7deed9c9f509572e5a31c37b70b7fba519cc4fd85f5536f01a7eb0069d39f894"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.947751 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerStarted","Data":"4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.952882 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l"] Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.959127 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bh727" event={"ID":"db0aea6c-f6f8-4548-905b-22d810b334d4","Type":"ContainerStarted","Data":"342946d2a4a384a6f0a19643ab5d90fa97f53de305c61a5302eccfa96b181f40"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.962849 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" 
event={"ID":"22ff872b-afc9-4fa7-812b-f47bb3add27c","Type":"ContainerStarted","Data":"2f94de76d368d97664c9676ded0277b97294b97bea27989e794d51fc6c81543e"} Jan 31 14:57:02 crc kubenswrapper[4763]: I0131 14:57:02.978134 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" event={"ID":"96ce635a-c905-4317-9f6d-64e1437d95c2","Type":"ContainerStarted","Data":"9814c6fa74dbe73ca7edeb7eaf6e6de772b5dac45769694996c928a7aab5d1dc"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.003008 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" event={"ID":"5d9ac26c-eb66-4772-b7ee-a6b646092c4b","Type":"ContainerStarted","Data":"2302157f6051cb6945834f1df0aa8520e57ccf4db5e78c1ec9c7e7913ec3cc7d"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.003047 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" event={"ID":"5d9ac26c-eb66-4772-b7ee-a6b646092c4b","Type":"ContainerStarted","Data":"f8d05686c2858bc92bd2e41e3aae29ac7958c66e517e0a32923e07d763aae11e"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.011072 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" event={"ID":"f6f56211-548f-4d20-9c0a-70108a8f557b","Type":"ContainerStarted","Data":"fbb2377f28aba3b4ba6092b125b0254a2a05b7fc8d467952776c243a93737a08"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.022171 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.022527 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.522513668 +0000 UTC m=+143.277251961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.024521 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" event={"ID":"bb78095b-d026-498f-9616-d8365161f809","Type":"ContainerStarted","Data":"53b64499123d2a0acca61ae9705595efe58d6b21354977c8ac2416a44c6b1182"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.024572 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" event={"ID":"bb78095b-d026-498f-9616-d8365161f809","Type":"ContainerStarted","Data":"15d3a12853d42ed33dc8bb430786b504cf974661a23c8706c5f9a3c19fe8f152"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.036295 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" event={"ID":"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac","Type":"ContainerStarted","Data":"aaa8ba458d3b60d29340529d0fa73caf87da8887c7c23d807766097f50118d7f"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.036340 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" event={"ID":"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac","Type":"ContainerStarted","Data":"80c11ffb6b6b7f3ace175bbbc27fa7d0fc3da7724863e9fa5d6a7e1ff907022d"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.102531 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-852vg"] Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.123772 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" event={"ID":"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47","Type":"ContainerStarted","Data":"9d35d644de1a97069cbe16d43105c4ac36ecd501fe816d2356dce1f49d07b428"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.123814 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" event={"ID":"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47","Type":"ContainerStarted","Data":"406b873df247435cb3b0a54b13deae5bf7bfd31bd8b90d2dec68d3da3b8a4441"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.124219 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.124394 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.125027 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 14:57:03.625011658 +0000 UTC m=+143.379749951 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.132068 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" event={"ID":"229488e3-89a8-4eb4-841e-980db3f8cfb3","Type":"ContainerStarted","Data":"ad79a062a0d7376cfab59427c466c0d955dfdc62d38bc98da632da1ceeada219"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.133821 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lvgt" event={"ID":"0cddc243-3a83-4398-87a9-7a111581bec5","Type":"ContainerStarted","Data":"5c62f7ee183dfe1e029224782212786ff5c04dc0638b114fef841d2564e55e45"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.134167 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lvgt" event={"ID":"0cddc243-3a83-4398-87a9-7a111581bec5","Type":"ContainerStarted","Data":"bb26d6e3f53b07cc5f909df05bd03481448ccbd13317a7a9613866e7d60a68d9"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.136444 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9cb6" event={"ID":"a766d0cf-2406-4406-aaec-51a9da3d6b55","Type":"ContainerStarted","Data":"e67dbae70f3848319fd820499e081558e078d976eda5c4b905a0590c1684b2c7"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.138713 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" event={"ID":"65c4b2d3-8915-480e-abf5-3b3e0184f778","Type":"ContainerStarted","Data":"d8872e708ab881cc8856cf37a1b15fe5412e07d9e31d6719c2e889aa0b19dd4a"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.141598 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" event={"ID":"ed007985-f681-4a45-a71a-ba27798fa94d","Type":"ContainerStarted","Data":"8a11bb5171ba5d9dc23d479eca3191eb0e7b909abf910bef0442cc55d9f1add6"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.141638 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" event={"ID":"ed007985-f681-4a45-a71a-ba27798fa94d","Type":"ContainerStarted","Data":"9b91deda393240ac0e049bdae4b0ccfb5212211e9a307158fbaf777467aadc40"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.145281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" event={"ID":"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa","Type":"ContainerStarted","Data":"29dc94a7cacd0d6782c91a85697e89bb52396ab2bc3c603de238426daecf7d28"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.145319 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" 
event={"ID":"d2c4bb39-a442-4316-81a9-d5e8f9d10eaa","Type":"ContainerStarted","Data":"7f20fa19d5bb86f940e6da3f6f5021f747522c3be2c435828c18b5855f767596"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.145782 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.150717 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" event={"ID":"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed","Type":"ContainerStarted","Data":"3803ed51792b02386aa7ef87726ea6a9518fc3a8a86643b1d5b77ebee1f3ad72"} Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.151737 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.151778 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.152039 4763 patch_prober.go:28] interesting pod/console-operator-58897d9998-mjbd9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.152066 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" podUID="d2c4bb39-a442-4316-81a9-d5e8f9d10eaa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.226807 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.227195 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.727181268 +0000 UTC m=+143.481919561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.327376 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.328792 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.82877447 +0000 UTC m=+143.583512763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.429661 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.429977 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:03.929965379 +0000 UTC m=+143.684703662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.432707 4763 csr.go:261] certificate signing request csr-v6bb7 is approved, waiting to be issued Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.443875 4763 csr.go:257] certificate signing request csr-v6bb7 is issued Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.534001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.534195 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.034163122 +0000 UTC m=+143.788901425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.534364 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.534758 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.034746801 +0000 UTC m=+143.789485094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: W0131 14:57:03.625846 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80834dac_7e21_4dda_8f32_3a19eced5753.slice/crio-d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074 WatchSource:0}: Error finding container d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074: Status 404 returned error can't find the container with id d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074 Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.637312 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.637582 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.137566531 +0000 UTC m=+143.892304824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.723054 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" podStartSLOduration=123.723037609 podStartE2EDuration="2m3.723037609s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.714370099 +0000 UTC m=+143.469108392" watchObservedRunningTime="2026-01-31 14:57:03.723037609 +0000 UTC m=+143.477775902" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.738627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.739025 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.239013549 +0000 UTC m=+143.993751842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.752909 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjjvp" podStartSLOduration=123.752892232 podStartE2EDuration="2m3.752892232s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.750358453 +0000 UTC m=+143.505096746" watchObservedRunningTime="2026-01-31 14:57:03.752892232 +0000 UTC m=+143.507630525" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.839630 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.840107 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.340092664 +0000 UTC m=+144.094830947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.849133 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9lvgt" podStartSLOduration=123.849113935 podStartE2EDuration="2m3.849113935s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.846811504 +0000 UTC m=+143.601549797" watchObservedRunningTime="2026-01-31 14:57:03.849113935 +0000 UTC m=+143.603852228" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.850100 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zznr9" podStartSLOduration=122.850090046 podStartE2EDuration="2m2.850090046s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.802521521 +0000 UTC m=+143.557259814" watchObservedRunningTime="2026-01-31 14:57:03.850090046 +0000 UTC m=+143.604828339" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.887311 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.891834 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" podStartSLOduration=123.891820479 podStartE2EDuration="2m3.891820479s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:03.890195359 +0000 UTC m=+143.644933652" watchObservedRunningTime="2026-01-31 14:57:03.891820479 +0000 UTC m=+143.646558772" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.898647 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:03 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:03 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:03 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.898712 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:03 crc kubenswrapper[4763]: I0131 14:57:03.942638 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:03 crc kubenswrapper[4763]: E0131 14:57:03.943124 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.44311258 +0000 UTC m=+144.197850873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.043741 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.044586 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.544571838 +0000 UTC m=+144.299310121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.100608 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bwc2g" podStartSLOduration=123.100593637 podStartE2EDuration="2m3.100593637s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.098383408 +0000 UTC m=+143.853121701" watchObservedRunningTime="2026-01-31 14:57:04.100593637 +0000 UTC m=+143.855331920" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.101070 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podStartSLOduration=124.101066642 podStartE2EDuration="2m4.101066642s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.044444444 +0000 UTC m=+143.799182737" watchObservedRunningTime="2026-01-31 14:57:04.101066642 +0000 UTC m=+143.855804935" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.133985 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zd45q" podStartSLOduration=125.13397017 podStartE2EDuration="2m5.13397017s" podCreationTimestamp="2026-01-31 14:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.13208898 +0000 UTC m=+143.886827273" watchObservedRunningTime="2026-01-31 14:57:04.13397017 +0000 UTC m=+143.888708463" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.150077 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.150421 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.650409813 +0000 UTC m=+144.405148106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.176113 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-87f9c" podStartSLOduration=123.176094355 podStartE2EDuration="2m3.176094355s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.174555197 +0000 UTC m=+143.929293490" watchObservedRunningTime="2026-01-31 14:57:04.176094355 +0000 UTC m=+143.930832648" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.194973 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" event={"ID":"2435f2fb-db3e-4a5a-910d-b2bdebfff9ed","Type":"ContainerStarted","Data":"4b7ac09e1683fd8dabfb4c5b5de8cfe0c575bae9dae1b7e1cae4dde0791c0d82"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.195442 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.213194 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" podStartSLOduration=124.213175862 podStartE2EDuration="2m4.213175862s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.209768236 +0000 UTC m=+143.964506539" watchObservedRunningTime="2026-01-31 14:57:04.213175862 +0000 UTC m=+143.967914155" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.219292 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-75x7z" event={"ID":"ed007985-f681-4a45-a71a-ba27798fa94d","Type":"ContainerStarted","Data":"3b3d27c7780aa1060916843040308355861c6b2ef9a619eff2f2055cd8fd347c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.245682 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" event={"ID":"4c48bb3f-235a-4dcd-ba1a-62f85f8946ac","Type":"ContainerStarted","Data":"8d7be5ea0747f83a9960f08fae5ca06d25aa6560575d596c4a703419afc72e37"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.252925 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.253221 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.753207732 +0000 UTC m=+144.507946025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.268766 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" event={"ID":"bb78095b-d026-498f-9616-d8365161f809","Type":"ContainerStarted","Data":"ea59f192bd69de9016c2dc9d98c422f3e9eb57b32b5ec14b8b4422e72f9c86d0"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.312309 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" event={"ID":"31cdca6f-11b2-4888-9a4c-4b06a94d1863","Type":"ContainerStarted","Data":"e88da184c6479564b9ca190b87ef6ab8b8d18a0af16f5321592d0f027029a41b"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.324606 4763 generic.go:334] "Generic (PLEG): container finished" podID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerID="481c9ea0495b01fa7dccab3a794e907a38c5c8dcc3be73e28b755c124a63743c" exitCode=0 Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.324670 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" event={"ID":"229488e3-89a8-4eb4-841e-980db3f8cfb3","Type":"ContainerDied","Data":"481c9ea0495b01fa7dccab3a794e907a38c5c8dcc3be73e28b755c124a63743c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.328973 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bh727" event={"ID":"db0aea6c-f6f8-4548-905b-22d810b334d4","Type":"ContainerStarted","Data":"97a86a7bd66eecb4d677a41a10e65f5357767ed64bb5a9943a51a8d83c59018c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.329891 4763 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.334642 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-nzj54" podStartSLOduration=123.334630784 podStartE2EDuration="2m3.334630784s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.239540815 +0000 UTC m=+143.994279108" watchObservedRunningTime="2026-01-31 14:57:04.334630784 +0000 UTC m=+144.089369077" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.343472 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" event={"ID":"80834dac-7e21-4dda-8f32-3a19eced5753","Type":"ContainerStarted","Data":"d711737b7197c600a56b4679bf5d2fde19b116c0e1902972f711cf31ad72c074"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.355520 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.356788 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.856773006 +0000 UTC m=+144.611511299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.358960 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerStarted","Data":"c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.359727 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.360008 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.371841 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.371915 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.391914 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.401911 4763 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mpmpg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.401960 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.449510 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 14:52:03 +0000 UTC, rotation deadline is 2026-12-15 22:59:32.996754159 +0000 UTC Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.449536 4763 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7640h2m28.547220467s for next certificate rotation Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.455997 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.457198 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:04.957180031 +0000 UTC m=+144.711918334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.489044 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" event={"ID":"ac92922f-89ed-41e7-bf6f-9750efc9cab0","Type":"ContainerStarted","Data":"310a57aebb3bfd5c0ed0a647bdc7e7ac8b0d23fcde9f6c65a6d7e2d70cf1f25c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.489830 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.525542 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.526663 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" event={"ID":"22ff872b-afc9-4fa7-812b-f47bb3add27c","Type":"ContainerStarted","Data":"0a89a45cd53ac1bcadb3cffc2d778cf2a244867f5fc70e60b74d488a4a697683"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.526975 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.547651 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" event={"ID":"8edb2bab-1e72-4b68-afed-2de0572a1071","Type":"ContainerStarted","Data":"8ac8fcada164283b16fcd66472958b240aadf1c8717ff8aafd5b3e4cca1cd45c"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.559823 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.560213 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.060199247 +0000 UTC m=+144.814937540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.564827 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.567263 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.568738 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5kfwr"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.570534 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9cb6" event={"ID":"a766d0cf-2406-4406-aaec-51a9da3d6b55","Type":"ContainerStarted","Data":"6b53d085f65466df248693cceb17eeed33350b009fad18662f02f10d66da77f5"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.578979 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5q274"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.585420 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" event={"ID":"e043d261-8774-411b-be6d-98dbb1f210a2","Type":"ContainerStarted","Data":"847da603210653d6a6eb4798db67cbcda9a95e127f4d0f0dee083dd1d9416037"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.586291 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.609877 4763 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jtt2l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.609942 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" podUID="e043d261-8774-411b-be6d-98dbb1f210a2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.613661 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2jtkm" event={"ID":"65c4b2d3-8915-480e-abf5-3b3e0184f778","Type":"ContainerStarted","Data":"a78dffec60999af2daf653657dc248a422023304b87076476a27df8e3f097cf7"} Jan 31 14:57:04 crc kubenswrapper[4763]: W0131 14:57:04.636045 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c40a34_73d2_4a28_b2bd_31e19e6361d2.slice/crio-1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6 WatchSource:0}: Error finding container 
1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6: Status 404 returned error can't find the container with id 1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6 Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.637750 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerStarted","Data":"ed7d3da6199e8bb4c55e177b1afca8ac78c017a1ea997eff233008f48616b7c8"} Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.637777 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.645945 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gxcjc"] Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.679640 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.685031 4763 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flcgf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.685098 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.703211 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.203185511 +0000 UTC m=+144.957923804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.706612 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.711256 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podStartSLOduration=123.711232072 podStartE2EDuration="2m3.711232072s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.702868811 +0000 UTC m=+144.457607124" watchObservedRunningTime="2026-01-31 14:57:04.711232072 +0000 UTC m=+144.465970375"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.745745 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l8kn4"]
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.746551 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"]
Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.746966 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.246947728 +0000 UTC m=+145.001686011 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.756187 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.777136 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bh727" podStartSLOduration=124.777118349 podStartE2EDuration="2m4.777118349s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.766051454 +0000 UTC m=+144.520789747" watchObservedRunningTime="2026-01-31 14:57:04.777118349 +0000 UTC m=+144.531856642"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.792682 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w9cb6" podStartSLOduration=5.792666365 podStartE2EDuration="5.792666365s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.792350325 +0000 UTC m=+144.547088618" watchObservedRunningTime="2026-01-31 14:57:04.792666365 +0000 UTC m=+144.547404658"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.811233 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.812429 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.312406041 +0000 UTC m=+145.067144334 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.883779 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fkt75" podStartSLOduration=123.883763309 podStartE2EDuration="2m3.883763309s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.88154143 +0000 UTC m=+144.636279723" watchObservedRunningTime="2026-01-31 14:57:04.883763309 +0000 UTC m=+144.638501602"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.889637 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:04 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:04 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:04 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.889749 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.917236 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:04 crc kubenswrapper[4763]: E0131 14:57:04.917579 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.417563824 +0000 UTC m=+145.172302117 (durationBeforeRetry 500ms).
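[Note: the repeated TearDown/MountDevice failures above all have one root cause: the kubelet has no registered CSI plugin named kubevirt.io.hostpath-provisioner yet (the csi-hostpathplugin pod only starts a few entries later). Registration is node-local: the plugin announces itself to the kubelet over a socket under /var/lib/kubelet/plugins_registry. A quick cluster-side proxy check is to list the CSIDriver API objects; the Go sketch below does that with client-go. The kubeconfig path and the program itself are illustrative, not tooling from this log.]

```go
// List CSIDriver objects as a rough check of whether
// kubevirt.io.hostpath-provisioner is installed. The kubelet's "list of
// registered CSI drivers" is its own node-local registry, so this API-side
// view is only a proxy for it.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CSIDriver objects are normally installed with the driver's manifests.
	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println(d.Name) // expect kubevirt.io.hostpath-provisioner once present
	}
}
```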
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.982682 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpplg" podStartSLOduration=123.982666397 podStartE2EDuration="2m3.982666397s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.978973332 +0000 UTC m=+144.733711625" watchObservedRunningTime="2026-01-31 14:57:04.982666397 +0000 UTC m=+144.737404690"
Jan 31 14:57:04 crc kubenswrapper[4763]: I0131 14:57:04.982830 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" podStartSLOduration=123.982824932 podStartE2EDuration="2m3.982824932s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:04.940631545 +0000 UTC m=+144.695369838" watchObservedRunningTime="2026-01-31 14:57:04.982824932 +0000 UTC m=+144.737563235"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.025900 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.027636 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.52760301 +0000 UTC m=+145.282341303 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.093376 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" podStartSLOduration=124.093358083 podStartE2EDuration="2m4.093358083s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.043432654 +0000 UTC m=+144.798170947" watchObservedRunningTime="2026-01-31 14:57:05.093358083 +0000 UTC m=+144.848096376"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.125839 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" podStartSLOduration=124.125824607 podStartE2EDuration="2m4.125824607s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.124235187 +0000 UTC m=+144.878973480" watchObservedRunningTime="2026-01-31 14:57:05.125824607 +0000 UTC m=+144.880562900"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.128389 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.128670 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.628660435 +0000 UTC m=+145.383398728 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.230180 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.230611 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.730581187 +0000 UTC m=+145.485319480 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.332471 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.333221 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.833208401 +0000 UTC m=+145.587946694 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.433704 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.434000 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:05.933986008 +0000 UTC m=+145.688724301 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.536095 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.536390 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.036379315 +0000 UTC m=+145.791117608 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.636817 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.637423 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.137398609 +0000 UTC m=+145.892136892 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.637688 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.637996 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.137988808 +0000 UTC m=+145.892727101 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.639454 4763 patch_prober.go:28] interesting pod/console-operator-58897d9998-mjbd9 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.639489 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mjbd9" podUID="d2c4bb39-a442-4316-81a9-d5e8f9d10eaa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.670952 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"456d6b1a1346cc108faafe6926b4f9472b7ce543eca0663c104667376f4cf961"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.698420 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" event={"ID":"6f2d8117-c3e5-498f-8458-e72238d0f0ac","Type":"ContainerStarted","Data":"82ba72f4fa7df2f9ad6025ec5cbca994e78b07f2d0f1c87ff0e2cb5e8acf9209"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.698464 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" event={"ID":"6f2d8117-c3e5-498f-8458-e72238d0f0ac","Type":"ContainerStarted","Data":"d1d40c0ba02382f5beb79752d92ba13fd4d270ccec6ecf859e594ba69a4c07ab"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.738427 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.739019 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.239003871 +0000 UTC m=+145.993742164 (durationBeforeRetry 500ms).
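[Note: each failure above arms a retry gate: nestedpendingoperations refuses to re-run the same volume operation until the printed deadline, here consistently 500ms out (durationBeforeRetry). The kubelet's real implementation lives in pkg/volume/util/nestedpendingoperations and tracks one exponential backoff per volume/pod operation key; the toy sketch below, with invented names, shows only the gating behavior visible in these lines.]

```go
// A toy version of "No retries permitted until <deadline>
// (durationBeforeRetry 500ms)". Not kubelet code.
package main

import (
	"errors"
	"fmt"
	"time"
)

type opGate struct {
	notBefore time.Time     // "No retries permitted until ..."
	delay     time.Duration // durationBeforeRetry
}

// tryRun refuses to re-run the operation before the deadline; on failure it
// re-arms the gate delay into the future, as the E-lines above show.
func (g *opGate) tryRun(op func() error) error {
	if time.Now().Before(g.notBefore) {
		return fmt.Errorf("no retries permitted until %s", g.notBefore.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.notBefore = time.Now().Add(g.delay)
		return err
	}
	return nil
}

func main() {
	gate := &opGate{delay: 500 * time.Millisecond}
	mountDevice := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	for i := 0; i < 4; i++ {
		fmt.Println(gate.tryRun(mountDevice)) // alternates real failure / gate refusal
		time.Sleep(200 * time.Millisecond)
	}
}
```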
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.755454 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rltbk" event={"ID":"22ff872b-afc9-4fa7-812b-f47bb3add27c","Type":"ContainerStarted","Data":"82b5eae420694e7d1719ccdf851ff890cbaff02f0c8642080ef507bd17f9e608"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.759167 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l8kn4" event={"ID":"1113d5ad-40c9-412f-92c2-2fb0d6ec2903","Type":"ContainerStarted","Data":"c2c009a175d1429c4cf224d15428e2d57bdfaac9033034c3b7e86bcdd0238516"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.773121 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" event={"ID":"301a24de-a6b1-45a1-a12d-663325e45fd6","Type":"ContainerStarted","Data":"287f4fc003336ab101921f30547c2c01a6faf9dd00bef9fa49e58d548c2d25b7"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.778425 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-c2zm8" event={"ID":"8edb2bab-1e72-4b68-afed-2de0572a1071","Type":"ContainerStarted","Data":"01fa4333d4a253cf405c341595ed734764be15b4283c12766510142007a216d5"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.783252 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" event={"ID":"09c58528-2088-4902-ab32-10cd90be0562","Type":"ContainerStarted","Data":"fb1a983d49894ff0ab18fd981047fa43bf0a6e15b9bdc699979c338f80cca563"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.783289 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" event={"ID":"09c58528-2088-4902-ab32-10cd90be0562","Type":"ContainerStarted","Data":"37ae68a5f4012911c14e0ed16b914774014b1b5d06ac7abb6359499bf76b96b8"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.794059 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5q274" event={"ID":"92ef1804-52cd-46a1-86e1-baf561981f8b","Type":"ContainerStarted","Data":"a1fc8efab5c39584667245f36e76d20cdb077bb01f0938d4bd2956edf41c2ec0"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.794103 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5q274" event={"ID":"92ef1804-52cd-46a1-86e1-baf561981f8b","Type":"ContainerStarted","Data":"10a780a402cf35fb0023055561f6e9174995c847c625f8d217f1bb2a9e1ca8f6"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.802743 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerStarted","Data":"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.803603 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.805049 4763 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8pcvn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body=
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.805085 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.818067 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" event={"ID":"e043d261-8774-411b-be6d-98dbb1f210a2","Type":"ContainerStarted","Data":"c6e9c0d00cddc92097f62e8b77dd4b4fe7135848d8ef3253a47190cdd69ff201"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.818727 4763 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jtt2l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.818774 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" podUID="e043d261-8774-411b-be6d-98dbb1f210a2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.825055 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podStartSLOduration=124.825043378 podStartE2EDuration="2m4.825043378s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.167121367 +0000 UTC m=+144.921859660" watchObservedRunningTime="2026-01-31 14:57:05.825043378 +0000 UTC m=+145.579781661"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.827706 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" event={"ID":"7540b5d1-cac8-4c3d-88f1-cd961bf8bd47","Type":"ContainerStarted","Data":"62a66bfd6cc767e7ee618a824626029933567e5c34bce3b55de02aa12eaba356"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.839188 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerStarted","Data":"f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.839228 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerStarted","Data":"1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.839962 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.840247 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.340237402 +0000 UTC m=+146.094975695 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.855266 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" event={"ID":"229488e3-89a8-4eb4-841e-980db3f8cfb3","Type":"ContainerStarted","Data":"c3a8270916cc07fc87a5bb9ec70366ec16a3ee818b16c81fc196bea7f0ce0a8b"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.855807 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.894533 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5q274" podStartSLOduration=6.894515916 podStartE2EDuration="6.894515916s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.894349131 +0000 UTC m=+145.649087424" watchObservedRunningTime="2026-01-31 14:57:05.894515916 +0000 UTC m=+145.649254209"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.900329 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" event={"ID":"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba","Type":"ContainerStarted","Data":"ef3640ba5dfd6ee8c2b57c29cccccc184137dd794ae305f3c34e1425040e217c"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.900378 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" event={"ID":"74fa35fc-4a80-4ab6-9c3d-b5eae43c4aba","Type":"ContainerStarted","Data":"8b7e49deeb4b590ab939f5487365f8533b6127d34db0b1012b0ed72a2f526d5b"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.901151 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:05 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:05 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:05 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.901187 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.906928 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" event={"ID":"80834dac-7e21-4dda-8f32-3a19eced5753","Type":"ContainerStarted","Data":"5a5f87c9f3f86b2cd1777b046b1ba01a718f729a1b1d1fae362a9b609f49f66d"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.914239 4763 generic.go:334] "Generic (PLEG): container finished" podID="ac92922f-89ed-41e7-bf6f-9750efc9cab0" containerID="f9860ce320981dfa3284da2e14b26b648c5f109f25564f3662c4878ec6345fe6" exitCode=0
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.914309 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" event={"ID":"ac92922f-89ed-41e7-bf6f-9750efc9cab0","Type":"ContainerStarted","Data":"bcbfe9cd2bfe6b0b1eb74ad0ee4bc4a1cd47e8b3421a3856246fb3b74e353a90"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.914335 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" event={"ID":"ac92922f-89ed-41e7-bf6f-9750efc9cab0","Type":"ContainerDied","Data":"f9860ce320981dfa3284da2e14b26b648c5f109f25564f3662c4878ec6345fe6"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.908949 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5s5p" podStartSLOduration=124.908932106 podStartE2EDuration="2m4.908932106s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.82993269 +0000 UTC m=+145.584670983" watchObservedRunningTime="2026-01-31 14:57:05.908932106 +0000 UTC m=+145.663670399"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.916525 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" event={"ID":"f313ecec-c631-4270-a297-51e482e3e306","Type":"ContainerStarted","Data":"00626847b6f2806ccf97368186ec88e0c96ba648c1756555d03911581d4e1a36"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.916556 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" event={"ID":"f313ecec-c631-4270-a297-51e482e3e306","Type":"ContainerStarted","Data":"6ffd71abac5d892e02e137909c1876c77fbdda37cd96e58f3b5b2f19eaabe01c"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.917461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" event={"ID":"31cdca6f-11b2-4888-9a4c-4b06a94d1863","Type":"ContainerStarted","Data":"84ede2fa918650a3ab16f7cf7fc4537f323f3b2fa26ed2dfdf2587ace448968c"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.923356 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podStartSLOduration=125.923341497 podStartE2EDuration="2m5.923341497s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.923308056 +0000 UTC m=+145.678046359" watchObservedRunningTime="2026-01-31 14:57:05.923341497 +0000 UTC m=+145.678079790"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.926846 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" event={"ID":"e6758143-5085-416e-9bdc-856a520c71de","Type":"ContainerStarted","Data":"123cec0865c0f54326439a7598a17a0d3df451d9acbfa2dc9a65f116dbb9ed57"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.944348 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:05 crc kubenswrapper[4763]: E0131 14:57:05.945329 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.445309152 +0000 UTC m=+146.200047445 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.952678 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jj6qz" podStartSLOduration=125.952662302 podStartE2EDuration="2m5.952662302s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.951097954 +0000 UTC m=+145.705836247" watchObservedRunningTime="2026-01-31 14:57:05.952662302 +0000 UTC m=+145.707400595"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.961763 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" event={"ID":"1dc5ca38-64fe-41f8-a989-0b035bf29414","Type":"ContainerStarted","Data":"32e7a943841330c0d61412a6e339704c21b1a02faa72959de721c41156cfd620"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.961809 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" event={"ID":"1dc5ca38-64fe-41f8-a989-0b035bf29414","Type":"ContainerStarted","Data":"2bfd671e8be6466ee4787ab40466107dd31d44c964fd6dd13fff675b9ba5b71b"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.969273 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerStarted","Data":"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.969345 4763 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flcgf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.969405 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.970835 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" event={"ID":"a7826828-7856-44a4-be9f-f1a939950c3e","Type":"ContainerStarted","Data":"510b0d29eff4a9d0163deefbfdaa8c49369e26b47e5cbee8f26b1a4e7c32c84b"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.970869 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" event={"ID":"a7826828-7856-44a4-be9f-f1a939950c3e","Type":"ContainerStarted","Data":"fddd83b22bce90c8677da04cc954fae7fac22d84fd2ad39e5e28efa0cfbe964e"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.973417 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" event={"ID":"84d11c6f-169b-4e21-87ec-8bb8930a1831","Type":"ContainerStarted","Data":"d42738e8b3360dca7ad3b81163d6097426276836ef34fbfd4844187a0b964616"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.973439 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" event={"ID":"84d11c6f-169b-4e21-87ec-8bb8930a1831","Type":"ContainerStarted","Data":"aa78659a0d31dbb475b1238e21a6de3fb7caa06077698a581b59b799d15e1656"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.974007 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.975102 4763 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f7fgc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.975129 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podUID="84d11c6f-169b-4e21-87ec-8bb8930a1831" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.976403 4763 generic.go:334] "Generic (PLEG): container finished" podID="330d3fd9-790f-406d-a122-152a1ab07e5c" containerID="9d8f1179a12c329107a54a8a722d4357264cb30c5b2390643df7556521ff5d6f" exitCode=0
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.976472 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerDied","Data":"9d8f1179a12c329107a54a8a722d4357264cb30c5b2390643df7556521ff5d6f"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.977630 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2bx5m" podStartSLOduration=125.977614532 podStartE2EDuration="2m5.977614532s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:05.975481085 +0000 UTC m=+145.730219378" watchObservedRunningTime="2026-01-31 14:57:05.977614532 +0000 UTC m=+145.732352825"
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.991317 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" event={"ID":"f6f56211-548f-4d20-9c0a-70108a8f557b","Type":"ContainerStarted","Data":"52b3223f73d8c49aceeccc88ac29abed3505575d49692c316c27ec46e81800d4"}
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.992670 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 31 14:57:05 crc kubenswrapper[4763]: I0131 14:57:05.992838 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.005782 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9zjqh" podStartSLOduration=125.00576743 podStartE2EDuration="2m5.00576743s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.003947613 +0000 UTC m=+145.758685906" watchObservedRunningTime="2026-01-31 14:57:06.00576743 +0000 UTC m=+145.760505723"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.008187 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.043513 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" podStartSLOduration=126.043494779 podStartE2EDuration="2m6.043494779s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.039208765 +0000 UTC m=+145.793947058" watchObservedRunningTime="2026-01-31 14:57:06.043494779 +0000 UTC m=+145.798233072"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.046003 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.047370 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.547357838 +0000 UTC m=+146.302096131 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.123785 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z8wss" podStartSLOduration=125.123766594 podStartE2EDuration="2m5.123766594s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.123439934 +0000 UTC m=+145.878178227" watchObservedRunningTime="2026-01-31 14:57:06.123766594 +0000 UTC m=+145.878504887"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.125252 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" podStartSLOduration=125.12524589 podStartE2EDuration="2m5.12524589s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.095991197 +0000 UTC m=+145.850729490" watchObservedRunningTime="2026-01-31 14:57:06.12524589 +0000 UTC m=+145.879984183"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.147246 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.148514 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.648500657 +0000 UTC m=+146.403238950 (durationBeforeRetry 500ms).
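[Note: the router's startup-probe output in this stretch follows the Kubernetes healthz convention: one [+]/[-] line per named check, a trailing "healthz check failed", and an HTTP 500 when any check fails, which the kubelet then reports as "HTTP probe failed with statuscode: 500". A minimal handler reproducing that format is sketched below; the checks are dummies named after the log's output, not the router's real checks.]

```go
// A sketch of a "[+]check ok / [-]check failed" health endpoint in the style
// of the k8s.io healthz aggregator. Illustrative only.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // probe sees statuscode: 500
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	http.HandleFunc("/healthz", healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not ready") }},
		{"process-running", func() error { return nil }},
	}))
	http.ListenAndServe(":8080", nil)
}
```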
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.207963 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podStartSLOduration=126.207945192 podStartE2EDuration="2m6.207945192s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.161270426 +0000 UTC m=+145.916008719" watchObservedRunningTime="2026-01-31 14:57:06.207945192 +0000 UTC m=+145.962683485"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.208912 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podStartSLOduration=125.208906863 podStartE2EDuration="2m5.208906863s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.206396124 +0000 UTC m=+145.961134417" watchObservedRunningTime="2026-01-31 14:57:06.208906863 +0000 UTC m=+145.963645156"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.249628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.249953 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.749941914 +0000 UTC m=+146.504680207 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.279827 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-8gxz5" podStartSLOduration=125.279811826 podStartE2EDuration="2m5.279811826s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.277281668 +0000 UTC m=+146.032019971" watchObservedRunningTime="2026-01-31 14:57:06.279811826 +0000 UTC m=+146.034550109"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.337576 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-j7kx4" podStartSLOduration=125.3375616 podStartE2EDuration="2m5.3375616s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.336078593 +0000 UTC m=+146.090816886" watchObservedRunningTime="2026-01-31 14:57:06.3375616 +0000 UTC m=+146.092299893"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.358662 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.359136 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.859117702 +0000 UTC m=+146.613855995 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.431319 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dncp9" podStartSLOduration=125.431304957 podStartE2EDuration="2m5.431304957s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:06.430284784 +0000 UTC m=+146.185023087" watchObservedRunningTime="2026-01-31 14:57:06.431304957 +0000 UTC m=+146.186043250"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.460512 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.461234 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:06.9612134 +0000 UTC m=+146.715951753 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.562187 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.562516 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.062496202 +0000 UTC m=+146.817234495 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.664111 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.664453 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.164427265 +0000 UTC m=+146.919165558 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.765353 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.765657 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.265643204 +0000 UTC m=+147.020381497 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.866640 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.867086 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.366952428 +0000 UTC m=+147.121690721 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.886336 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:06 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:06 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:06 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.886394 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.968055 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.968192 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.468174588 +0000 UTC m=+147.222912881 (durationBeforeRetry 500ms).
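[Note: the podStartSLOduration printed by pod_startup_latency_tracker throughout this excerpt is plain arithmetic: watchObservedRunningTime minus podCreationTimestamp, with podStartE2EDuration being the same value formatted as a duration. Re-deriving the route-controller-manager number from the timestamps logged near the top of this excerpt:]

```go
// Reproduce podStartSLOduration from the logged timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-31 14:55:01 +0000 UTC") // podCreationTimestamp
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-31 14:57:04.711232072 +0000 UTC") // watchObservedRunningTime
	if err != nil {
		panic(err)
	}
	d := running.Sub(created)
	fmt.Printf("%.9f\n", d.Seconds()) // 123.711232072, the logged podStartSLOduration
	fmt.Println(d)                    // 2m3.711232072s, the logged podStartE2EDuration
}
```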
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:06 crc kubenswrapper[4763]: I0131 14:57:06.968269 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:06 crc kubenswrapper[4763]: E0131 14:57:06.968550 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.468542459 +0000 UTC m=+147.223280752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.003755 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l8kn4" event={"ID":"1113d5ad-40c9-412f-92c2-2fb0d6ec2903","Type":"ContainerStarted","Data":"c367a26e2ece0070291348bc7efa7b56efa140c5b2ed83cb252a5a15f57c810a"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.005281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" event={"ID":"301a24de-a6b1-45a1-a12d-663325e45fd6","Type":"ContainerStarted","Data":"91fb83c7a4ae79d09f15b0ccba3ad87c567a9a7d0bb06a687765d025b7cee5f3"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.005678 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.007280 4763 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4fhtj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.007322 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" podUID="301a24de-a6b1-45a1-a12d-663325e45fd6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.007349 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" 
event={"ID":"e6758143-5085-416e-9bdc-856a520c71de","Type":"ContainerStarted","Data":"0dc0630ff026b9c2e13627d0fe50273a3b95b3aa41e0eec755870b558d1668bd"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.019986 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerStarted","Data":"a76ce5d3db29e4f9da884517c12ae377269007818566deb9c9386ee1cd4fd25a"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.023048 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" event={"ID":"6f2d8117-c3e5-498f-8458-e72238d0f0ac","Type":"ContainerStarted","Data":"7159a82b753e9850cb63bff88d97c6e1ade82629304fe20a1011bf3aee2de73b"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.023154 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.030390 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" podStartSLOduration=126.030362729 podStartE2EDuration="2m6.030362729s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.027413377 +0000 UTC m=+146.782151740" watchObservedRunningTime="2026-01-31 14:57:07.030362729 +0000 UTC m=+146.785101062" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.034676 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" event={"ID":"80834dac-7e21-4dda-8f32-3a19eced5753","Type":"ContainerStarted","Data":"5e181f6da09fdf74a70e41b344a2b9b02b08025edaf9d734ed2b557ab70a6604"} Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035544 4763 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-8pcvn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035609 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035900 4763 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f7fgc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.035966 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podUID="84d11c6f-169b-4e21-87ec-8bb8930a1831" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036314 4763 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-flcgf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036367 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036931 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.036974 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.040044 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.040804 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.054269 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gxcjc" podStartSLOduration=126.054240265 podStartE2EDuration="2m6.054240265s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.043041025 +0000 UTC m=+146.797779338" watchObservedRunningTime="2026-01-31 14:57:07.054240265 +0000 UTC m=+146.808978598" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.067786 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" podStartSLOduration=126.067757947 podStartE2EDuration="2m6.067757947s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.064947249 +0000 UTC m=+146.819685602" watchObservedRunningTime="2026-01-31 14:57:07.067757947 +0000 UTC m=+146.822496270" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.070292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.070404 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.570388879 +0000 UTC m=+147.325127172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.071530 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.074651 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.574633822 +0000 UTC m=+147.329372125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.075575 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jtt2l" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.082333 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.083637 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-852vg" podStartSLOduration=126.083617562 podStartE2EDuration="2m6.083617562s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:07.082092904 +0000 UTC m=+146.836831207" watchObservedRunningTime="2026-01-31 14:57:07.083617562 +0000 UTC m=+146.838355855" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.084104 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.084223 4763 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nwfnl container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.084267 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl" podUID="ac92922f-89ed-41e7-bf6f-9750efc9cab0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.173006 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.173201 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.673175778 +0000 UTC m=+147.427914071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.173330 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.173673 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.673661454 +0000 UTC m=+147.428399747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.274858 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.275082 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.775049329 +0000 UTC m=+147.529787642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.275200 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.275504 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.775489572 +0000 UTC m=+147.530227875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.377388 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.377550 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.877515468 +0000 UTC m=+147.632253791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.378465 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.378984 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.878955983 +0000 UTC m=+147.633694306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.483997 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.484431 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:07.984413716 +0000 UTC m=+147.739152019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.593592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.595974 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.095954618 +0000 UTC m=+147.850692921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.696561 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.696932 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.19691345 +0000 UTC m=+147.951651753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.696995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.697307 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.197298303 +0000 UTC m=+147.952036606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.800682 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.800861 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.300831625 +0000 UTC m=+148.055569918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.801008 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.801352 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.301340901 +0000 UTC m=+148.056079194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.886400 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:07 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:07 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:07 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.886457 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.902016 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.902200 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.402175439 +0000 UTC m=+148.156913732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.902283 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:07 crc kubenswrapper[4763]: E0131 14:57:07.902586 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.402576921 +0000 UTC m=+148.157315314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999329 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999386 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999438 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:07 crc kubenswrapper[4763]: I0131 14:57:07.999387 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003128 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.003385 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.503349058 +0000 UTC m=+148.258087361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003464 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003495 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003531 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.003557 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.003887 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.503877445 +0000 UTC m=+148.258615738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.012553 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.018744 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.019492 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.041783 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l8kn4" event={"ID":"1113d5ad-40c9-412f-92c2-2fb0d6ec2903","Type":"ContainerStarted","Data":"c23f7930b2a127978fd2f7d73e226c0347d2c69ae39fe7fe4658ddd996198d69"} Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.041888 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.043600 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" event={"ID":"330d3fd9-790f-406d-a122-152a1ab07e5c","Type":"ContainerStarted","Data":"7243cb12415fcd1c0a6c4f36767fe7bdcd431b37d26d6d60cf2dd182417f6a09"} Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044364 4763 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-5zjq4 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044399 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" podUID="229488e3-89a8-4eb4-841e-980db3f8cfb3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044568 4763 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4fhtj container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.044594 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj" podUID="301a24de-a6b1-45a1-a12d-663325e45fd6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.051127 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.059423 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l8kn4" podStartSLOduration=9.059408688 podStartE2EDuration="9.059408688s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:08.058056756 +0000 UTC m=+147.812795049" watchObservedRunningTime="2026-01-31 14:57:08.059408688 +0000 UTC m=+147.814146971" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.071805 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.085361 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.108869 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.109348 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.110167 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.610151023 +0000 UTC m=+148.364889316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.123563 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.130875 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" podStartSLOduration=128.130858969 podStartE2EDuration="2m8.130858969s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:08.129837957 +0000 UTC m=+147.884576260" watchObservedRunningTime="2026-01-31 14:57:08.130858969 +0000 UTC m=+147.885597272" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.211523 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.211947 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.71192749 +0000 UTC m=+148.466665773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.313972 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.314300 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.814282195 +0000 UTC m=+148.569020488 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.365418 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.415672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.417096 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:08.917077895 +0000 UTC m=+148.671816188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.517367 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.517682 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.017666576 +0000 UTC m=+148.772404869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: W0131 14:57:08.545908 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e WatchSource:0}: Error finding container ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e: Status 404 returned error can't find the container with id ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.618425 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.618825 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.118810354 +0000 UTC m=+148.873548657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.719771 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.720136 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.220121556 +0000 UTC m=+148.974859849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.821582 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.821981 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.321965186 +0000 UTC m=+149.076703479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.885852 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:08 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:08 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:08 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.885906 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:08 crc kubenswrapper[4763]: I0131 14:57:08.923115 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:08 crc kubenswrapper[4763]: E0131 14:57:08.923388 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.423372462 +0000 UTC m=+149.178110745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.024283 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.024562 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.524550702 +0000 UTC m=+149.279288995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.044185 4763 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f7fgc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.044248 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc" podUID="84d11c6f-169b-4e21-87ec-8bb8930a1831" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.086923 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e49920ad6b55837fe5419ae6a0abfcef0e7f59c2433444f4113eec2ac2726755"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.096239 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"be4828a0830ba7273db1a424ab41466f911c98d49e719ac495916e7ffc4ef017"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.105979 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"842f4d39e3f908ad742c890ec35eea76f854bc8df8197212137ef41d421444de"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.106028 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ad96203a09d297b43e1fc4986819a7e20491515093b726af6deba113ecd7397e"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.125223 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.125531 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.625517314 +0000 UTC m=+149.380255607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.137518 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4806868c4c8273ef6c9fddceefbc84aea9c3e9b1854132c4349aa8e4ec08bbf9"} Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.227109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.228374 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.728330394 +0000 UTC m=+149.483068687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.330964 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.331088 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.831069032 +0000 UTC m=+149.585807325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.331238 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.331494 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.831485955 +0000 UTC m=+149.586224248 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.432422 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.432568 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.93254606 +0000 UTC m=+149.687284353 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.432600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.432890 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:09.93288227 +0000 UTC m=+149.687620563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.533949 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.534137 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.034109301 +0000 UTC m=+149.788847594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.534269 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.534625 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.034613317 +0000 UTC m=+149.789351670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.635845 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.636031 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.136005872 +0000 UTC m=+149.890744165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.636373 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.636683 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.136674583 +0000 UTC m=+149.891412876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.691845 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.692405 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.694303 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.694724 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.704118 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737374 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737518 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737542 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.737623 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.237596794 +0000 UTC m=+149.992335087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.737954 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.738307 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.238298926 +0000 UTC m=+149.993037219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838662 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.838824 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.338798274 +0000 UTC m=+150.093536557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838909 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838939 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.838958 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.839068 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.839261 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.339253518 +0000 UTC m=+150.093991811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.862443 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.888252 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:09 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:09 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:09 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.888329 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.940545 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.940729 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.440687975 +0000 UTC m=+150.195426288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:09 crc kubenswrapper[4763]: I0131 14:57:09.940893 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:09 crc kubenswrapper[4763]: E0131 14:57:09.941231 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.441221071 +0000 UTC m=+150.195959364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.008714 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.042164 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.042493 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.542479733 +0000 UTC m=+150.297218026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.143379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.143904 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.643887739 +0000 UTC m=+150.398626032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.146925 4763 generic.go:334] "Generic (PLEG): container finished" podID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerID="f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5" exitCode=0 Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.147031 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerDied","Data":"f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5"} Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.148117 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1f811be19ca53a5eda82c567dd9bc8aa76845bc4c4c6a6582f66148ff0509f86"} Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.159661 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c8e361e95ea073b25fd5b3ea16920e9769595ab3cc814b162ab859a83be58d2"} Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.160362 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.179452 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"06b21accff0c26d9d7dc03c0bc3ef51d7353a0422734ec9b7f8e2fd5290a1778"} Jan 31 14:57:10 crc kubenswrapper[4763]: 
I0131 14:57:10.211415 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.244232 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.245031 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.745017126 +0000 UTC m=+150.499755419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.314449 4763 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.345414 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.345890 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.845867075 +0000 UTC m=+150.600605438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.446640 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.446876 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.946845058 +0000 UTC m=+150.701583351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.447021 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.447464 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:57:10.947451287 +0000 UTC m=+150.702189580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dzr7c" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.548383 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 31 14:57:10 crc kubenswrapper[4763]: E0131 14:57:10.548801 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:57:11.04877718 +0000 UTC m=+150.803515473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.613939 4763 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T14:57:10.314662611Z","Handler":null,"Name":""}
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.620970 4763 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.621002 4763 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.650418 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.652713 4763 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
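The sequence just above is the turning point of this section: every MountDevice/TearDown attempt for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 had been failing fast because no CSI driver named kubevirt.io.hostpath-provisioner was registered with the kubelet. Once the csi-hostpathplugin-5kfwr containers came up, the registration socket appeared under /var/lib/kubelet/plugins_registry/, the kubelet validated and registered the driver, and the very next MountDevice attempt was skipped because this driver does not advertise the STAGE_UNSTAGE_VOLUME capability. A minimal sketch of that capability decision, assuming the CSI spec's stock Go bindings (github.com/container-storage-interface/spec/lib/go/csi); the nodeServer type here is a hypothetical stand-in, not the hostpath provisioner's actual source:

```go
// nodecaps.go — a minimal sketch, assuming the CSI spec's Go bindings.
// The nodeServer type is hypothetical; only the capability wiring matters.
package hostpathsketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct {
	csi.UnimplementedNodeServer
}

// NodeGetCapabilities reports which node RPCs this plugin supports. Returning
// no STAGE_UNSTAGE_VOLUME entry is what makes the kubelet skip the
// NodeStageVolume/MountDevice step and log "STAGE_UNSTAGE_VOLUME capability
// not set. Skipping MountDevice..." as in the entry above.
func (s *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			// A driver that did stage volumes on the node would instead include:
			// {Type: &csi.NodeServiceCapability_Rpc{Rpc: &csi.NodeServiceCapability_RPC{
			//     Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
			// }}},
		},
	}, nil
}
```

With no staging step to perform, the kubelet proceeds straight to the publish (SetUp) call, which is why the MountVolume.MountDevice and MountVolume.SetUp success entries follow immediately below.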
Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.652748 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.678316 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dzr7c\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.751542 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.758893 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.886206 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:10 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:10 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:10 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.886260 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.896226 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.988306 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.990074 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:10 crc kubenswrapper[4763]: I0131 14:57:10.997216 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.010108 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.011471 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5zjq4" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.056403 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.056507 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.056643 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.070074 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.152269 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.157754 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.157805 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.157836 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc 
kubenswrapper[4763]: I0131 14:57:11.158428 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.158627 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.184065 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"certified-operators-k6ddv\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") " pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.184890 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.186992 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.191036 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.193460 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.193508 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerStarted","Data":"1c3251daea8c98cbe723a2aab08e84229fbcd9000a33f6fdb2dfc2f339c11130"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.193531 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerStarted","Data":"a0bc2a82642ee311d6552a5e94a028bf0431b120d860d1f88eae5e6a24094925"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.194972 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerStarted","Data":"f6421d1d39f19dfe9997df0c879a0f9ff7802342de47df550a2b31d059ccd341"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.214638 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"1736498c7cf56f7eafa34020b6b340b116143234e02c496c789677a4f788cf2f"} Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.214685 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" event={"ID":"d73a5142-56cf-4676-a6f1-a00868938c4d","Type":"ContainerStarted","Data":"981572eb6b8d330cda41bf41133a89d23b6eb9a0cdf02e907c24827ab4badccd"} Jan 31 14:57:11 crc 
kubenswrapper[4763]: I0131 14:57:11.218497 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.2184831799999998 podStartE2EDuration="2.21848318s" podCreationTimestamp="2026-01-31 14:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:11.217664664 +0000 UTC m=+150.972402957" watchObservedRunningTime="2026-01-31 14:57:11.21848318 +0000 UTC m=+150.973221473" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.237122 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" podStartSLOduration=12.237108062 podStartE2EDuration="12.237108062s" podCreationTimestamp="2026-01-31 14:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:11.234894862 +0000 UTC m=+150.989633175" watchObservedRunningTime="2026-01-31 14:57:11.237108062 +0000 UTC m=+150.991846355" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.259159 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.259417 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.259560 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.323052 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.361400 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.361585 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.361606 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.362390 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.363123 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.380547 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9lvgt"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.380576 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9lvgt"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.384462 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"community-operators-9df4p\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") " pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.385523 4763 patch_prober.go:28] interesting pod/console-f9d7485db-9lvgt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.387440 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lvgt" podUID="0cddc243-3a83-4398-87a9-7a111581bec5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.397304 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"]
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.398240 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.407477 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"]
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.438742 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.468656 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.468984 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.469013 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570594 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") pod \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") "
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") pod \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") "
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570780 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") pod \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\" (UID: \"20c40a34-73d2-4a28-b2bd-31e19e6361d2\") "
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570920 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570963 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.570992 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.571833 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.572054 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.572058 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume" (OuterVolumeSpecName: "config-volume") pod "20c40a34-73d2-4a28-b2bd-31e19e6361d2" (UID: "20c40a34-73d2-4a28-b2bd-31e19e6361d2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.578220 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20c40a34-73d2-4a28-b2bd-31e19e6361d2" (UID: "20c40a34-73d2-4a28-b2bd-31e19e6361d2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.583126 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w" (OuterVolumeSpecName: "kube-api-access-4gq7w") pod "20c40a34-73d2-4a28-b2bd-31e19e6361d2" (UID: "20c40a34-73d2-4a28-b2bd-31e19e6361d2"). InnerVolumeSpecName "kube-api-access-4gq7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.587456 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mr7l4"]
Jan 31 14:57:11 crc kubenswrapper[4763]: E0131 14:57:11.587632 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerName="collect-profiles"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.587642 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerName="collect-profiles"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.587761 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" containerName="collect-profiles"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.588365 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593009 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593045 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593106 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.593144 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.598187 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"certified-operators-4m6qg\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.600936 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"]
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.647833 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"]
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.671955 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672007 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672031 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672061 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20c40a34-73d2-4a28-b2bd-31e19e6361d2-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672073 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gq7w\" (UniqueName: \"kubernetes.io/projected/20c40a34-73d2-4a28-b2bd-31e19e6361d2-kube-api-access-4gq7w\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.672082 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20c40a34-73d2-4a28-b2bd-31e19e6361d2-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.683358 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.713942 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.772941 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.772995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.773019 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.773565 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.773850 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.800706 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"community-operators-mr7l4\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.841090 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mjbd9"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.884889 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-87f9c"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.888219 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:11 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:11 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:11 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.888273 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.908734 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4"
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.944084 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9df4p"]
Jan 31 14:57:11 crc kubenswrapper[4763]: I0131 14:57:11.999668 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"]
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.006609 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.006657 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.017873 4763 patch_prober.go:28] interesting pod/apiserver-76f77b778f-tv9s8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]log ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]etcd ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/max-in-flight-filter ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/openshift.io-startinformers ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 31 14:57:12 crc kubenswrapper[4763]: livez check failed
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.017929 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" podUID="330d3fd9-790f-406d-a122-152a1ab07e5c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.093761 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.108146 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nwfnl"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.227650 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerStarted","Data":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.227859 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.232378 4763 generic.go:334] "Generic (PLEG): container finished" podID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" exitCode=0
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.232517 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.232549 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerStarted","Data":"6f670464716ecf8ab5d99a2382a3bcaf7162a13bd03fa816cb2c7b4734ade299"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.242006 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"]
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.244910 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.245455 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" exitCode=0
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.247006 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.247574 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerStarted","Data":"9c7dca8f63ce2a8f4eb26e014c54162f55c4578fef6f425b844a6c85dc4561db"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.249172 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" podStartSLOduration=131.249153459 podStartE2EDuration="2m11.249153459s" podCreationTimestamp="2026-01-31 14:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:12.246811576 +0000 UTC m=+152.001549869" watchObservedRunningTime="2026-01-31 14:57:12.249153459 +0000 UTC m=+152.003891752"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.260758 4763 generic.go:334] "Generic (PLEG): container finished" podID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerID="1c3251daea8c98cbe723a2aab08e84229fbcd9000a33f6fdb2dfc2f339c11130" exitCode=0
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.260819 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerDied","Data":"1c3251daea8c98cbe723a2aab08e84229fbcd9000a33f6fdb2dfc2f339c11130"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.265521 4763 generic.go:334] "Generic (PLEG): container finished" podID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" exitCode=0
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.265586 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.265608 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerStarted","Data":"1eaa5c467faffeb2ba7ad8dc241225ca0c8240c2cf3a8e19cde7c5ee1bfecc47"}
Jan 31 14:57:12 crc kubenswrapper[4763]: W0131 14:57:12.269325 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3a48e2e_604d_4cd9_b0c8_a290f4a81ffa.slice/crio-29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372 WatchSource:0}: Error finding container 29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372: Status 404 returned error can't find the container with id 29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.270297 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.275762 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989" event={"ID":"20c40a34-73d2-4a28-b2bd-31e19e6361d2","Type":"ContainerDied","Data":"1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6"}
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.275825 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d970cab673873dbd0ffb836dc82303a0a52ef1bbbe7617fdf76121fb69fc8f6"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.283239 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.299334 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f7fgc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.325560 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4fhtj"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.336955 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.337559 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.339678 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.341862 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.342131 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.380339 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.380443 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.483734 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.484543 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.484621 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.519424 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.722019 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.904761 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:12 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:12 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:12 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.905163 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:12 crc kubenswrapper[4763]: I0131 14:57:12.996892 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 31 14:57:13 crc kubenswrapper[4763]: W0131 14:57:13.010622 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e7ac98e_5206_404b_a648_3ef3d778619c.slice/crio-ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50 WatchSource:0}: Error finding container ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50: Status 404 returned error can't find the container with id ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.182620 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"]
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.183583 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.185401 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.193185 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"]
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.276119 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e7ac98e-5206-404b-a648-3ef3d778619c","Type":"ContainerStarted","Data":"ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50"}
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.279175 4763 generic.go:334] "Generic (PLEG): container finished" podID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerID="cbcf059643f243b97663d9030a999deafef368163de2196a0d497c8e7eabbc09" exitCode=0
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.279291 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"cbcf059643f243b97663d9030a999deafef368163de2196a0d497c8e7eabbc09"}
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.279333 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerStarted","Data":"29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372"}
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.293492 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.293548 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.293589 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.394875 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.395052 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.395103 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.397553 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.398444 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.436870 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"redhat-marketplace-wnmmq\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") " pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.499285 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.515485 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.578120 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"]
Jan 31 14:57:13 crc kubenswrapper[4763]: E0131 14:57:13.578590 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerName="pruner"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.578734 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerName="pruner"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.578932 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e5646c-e6be-4d7c-9839-540a69daf0e9" containerName="pruner"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.579828 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.592790 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"]
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.596339 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") pod \"49e5646c-e6be-4d7c-9839-540a69daf0e9\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") "
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.596492 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") pod \"49e5646c-e6be-4d7c-9839-540a69daf0e9\" (UID: \"49e5646c-e6be-4d7c-9839-540a69daf0e9\") "
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.596809 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49e5646c-e6be-4d7c-9839-540a69daf0e9" (UID: "49e5646c-e6be-4d7c-9839-540a69daf0e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.602288 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49e5646c-e6be-4d7c-9839-540a69daf0e9" (UID: "49e5646c-e6be-4d7c-9839-540a69daf0e9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.697599 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.697923 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.697994 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.698040 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49e5646c-e6be-4d7c-9839-540a69daf0e9-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.698061 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49e5646c-e6be-4d7c-9839-540a69daf0e9-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799002 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799064 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799088 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.799610 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.800102 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.816284 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"redhat-marketplace-22bkm\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.888462 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:13 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:13 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:13 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.888665 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.903910 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm"
Jan 31 14:57:13 crc kubenswrapper[4763]: I0131 14:57:13.983892 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"]
Jan 31 14:57:14 crc kubenswrapper[4763]: W0131 14:57:14.019834 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2434f0b9_846a_444c_b487_745d4010002b.slice/crio-3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802 WatchSource:0}: Error finding container 3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802: Status 404 returned error can't find the container with id 3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.122663 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"]
Jan 31 14:57:14 crc kubenswrapper[4763]: W0131 14:57:14.133258 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e24d612_62ed_4bd5_8e07_889710d16851.slice/crio-c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce WatchSource:0}: Error finding container c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce: Status 404 returned error can't find the container with id c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.177409 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.177461 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.181048 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"]
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.182147 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.185816 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.191466 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"]
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.210255 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.210557 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.210605 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.289282 4763 generic.go:334] "Generic (PLEG): container finished" podID="2434f0b9-846a-444c-b487-745d4010002b" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" exitCode=0
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.289763 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579"}
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.289816 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerStarted","Data":"3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802"}
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.293808 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerStarted","Data":"c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce"}
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.295981 4763 generic.go:334] "Generic (PLEG): container finished" podID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerID="47869dfdc0f5a04213f8f03ac61c750dc9ff1cfcf0748564443776a01410c432" exitCode=0
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.296028 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e7ac98e-5206-404b-a648-3ef3d778619c","Type":"ContainerDied","Data":"47869dfdc0f5a04213f8f03ac61c750dc9ff1cfcf0748564443776a01410c432"}
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.303891 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"49e5646c-e6be-4d7c-9839-540a69daf0e9","Type":"ContainerDied","Data":"a0bc2a82642ee311d6552a5e94a028bf0431b120d860d1f88eae5e6a24094925"}
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.303933 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0bc2a82642ee311d6552a5e94a028bf0431b120d860d1f88eae5e6a24094925"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.304008 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.312371 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.312467 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.312512 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.313385 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.313945 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.335142 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"redhat-operators-wxzg8\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") " pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.497105 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.578112 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"]
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.580781 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.591262 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"]
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.616607 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.616662 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.616685 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.717915 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718296 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718325 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718582 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.718983 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.738875 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"redhat-operators-4bfmh\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.787709 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"]
Jan 31 14:57:14 crc kubenswrapper[4763]: W0131 14:57:14.796389 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f3cc890_2041_4983_8501_088c40c22b77.slice/crio-2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b WatchSource:0}: Error finding container 2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b: Status 404 returned error can't find the container with id 2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.886857 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 31 14:57:14 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld
Jan 31 14:57:14 crc kubenswrapper[4763]: [+]process-running ok
Jan 31 14:57:14 crc kubenswrapper[4763]: healthz check failed
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.886907 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 31 14:57:14 crc kubenswrapper[4763]: I0131 14:57:14.904608 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh"
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.316236 4763 generic.go:334] "Generic (PLEG): container finished" podID="1e24d612-62ed-4bd5-8e07-889710d16851" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f" exitCode=0
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.316335 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"}
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.319581 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerStarted","Data":"2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b"}
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.414355 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"]
Jan 31 14:57:15 crc kubenswrapper[4763]: W0131 14:57:15.440369 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod463b0d45_1b3b_46a1_afbd_650fa065b38f.slice/crio-64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9 WatchSource:0}: Error finding container 64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9: Status 404 returned error can't find the container with id 64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.636151 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.753361 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") pod \"5e7ac98e-5206-404b-a648-3ef3d778619c\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") "
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.753481 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") pod \"5e7ac98e-5206-404b-a648-3ef3d778619c\" (UID: \"5e7ac98e-5206-404b-a648-3ef3d778619c\") "
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.754989 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e7ac98e-5206-404b-a648-3ef3d778619c" (UID: "5e7ac98e-5206-404b-a648-3ef3d778619c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.767839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e7ac98e-5206-404b-a648-3ef3d778619c" (UID: "5e7ac98e-5206-404b-a648-3ef3d778619c"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.855386 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e7ac98e-5206-404b-a648-3ef3d778619c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.855427 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e7ac98e-5206-404b-a648-3ef3d778619c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.887077 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:15 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:15 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:15 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:15 crc kubenswrapper[4763]: I0131 14:57:15.887137 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.328022 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e7ac98e-5206-404b-a648-3ef3d778619c","Type":"ContainerDied","Data":"ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.328046 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.328059 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb43162b5b5c5ccf5049c1ecc38ca80a82b1da0bd3e6319c633833964c09b50" Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.330128 4763 generic.go:334] "Generic (PLEG): container finished" podID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e" exitCode=0 Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.330194 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.330222 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerStarted","Data":"64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.332075 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f3cc890-2041-4983-8501-088c40c22b77" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" exitCode=0 Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.332098 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18"} Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.884832 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:16 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:16 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:16 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:16 crc kubenswrapper[4763]: I0131 14:57:16.884906 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.013168 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.018900 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tv9s8" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.372208 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l8kn4" Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.885095 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:17 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 
14:57:17 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:17 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:17 crc kubenswrapper[4763]: I0131 14:57:17.885160 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:18 crc kubenswrapper[4763]: I0131 14:57:18.886065 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:18 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:18 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:18 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:18 crc kubenswrapper[4763]: I0131 14:57:18.886148 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:19 crc kubenswrapper[4763]: I0131 14:57:19.885154 4763 patch_prober.go:28] interesting pod/router-default-5444994796-87f9c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:57:19 crc kubenswrapper[4763]: [-]has-synced failed: reason withheld Jan 31 14:57:19 crc kubenswrapper[4763]: [+]process-running ok Jan 31 14:57:19 crc kubenswrapper[4763]: healthz check failed Jan 31 14:57:19 crc kubenswrapper[4763]: I0131 14:57:19.885426 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-87f9c" podUID="47ac991e-3a26-4da1-9cf0-6f0944a3bf7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:57:20 crc kubenswrapper[4763]: I0131 14:57:20.886075 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:20 crc kubenswrapper[4763]: I0131 14:57:20.889185 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-87f9c" Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.380933 4763 patch_prober.go:28] interesting pod/console-f9d7485db-9lvgt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.381312 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lvgt" podUID="0cddc243-3a83-4398-87a9-7a111581bec5" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.593541 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.594055 
4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.593580 4763 patch_prober.go:28] interesting pod/downloads-7954f5f757-bh727 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 31 14:57:21 crc kubenswrapper[4763]: I0131 14:57:21.594539 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bh727" podUID="db0aea6c-f6f8-4548-905b-22d810b334d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 31 14:57:23 crc kubenswrapper[4763]: I0131 14:57:23.070193 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:57:23 crc kubenswrapper[4763]: I0131 14:57:23.081045 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84302428-88e1-47ba-84cc-7d12472f9aa2-metrics-certs\") pod \"network-metrics-daemon-26pm5\" (UID: \"84302428-88e1-47ba-84cc-7d12472f9aa2\") " pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:57:23 crc kubenswrapper[4763]: I0131 14:57:23.379718 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-26pm5" Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.908934 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.909196 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" containerID="cri-o://b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1" gracePeriod=30 Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.912843 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:25 crc kubenswrapper[4763]: I0131 14:57:25.913111 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" containerID="cri-o://c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a" gracePeriod=30 Jan 31 14:57:30 crc kubenswrapper[4763]: I0131 14:57:30.902572 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.287935 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.287998 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.384628 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.388128 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9lvgt" Jan 31 14:57:31 crc kubenswrapper[4763]: I0131 14:57:31.605965 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bh727" Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.049581 4763 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mpmpg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.050160 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: 
connection refused" Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.435403 4763 generic.go:334] "Generic (PLEG): container finished" podID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerID="c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a" exitCode=0 Jan 31 14:57:32 crc kubenswrapper[4763]: I0131 14:57:32.435473 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerDied","Data":"c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a"} Jan 31 14:57:33 crc kubenswrapper[4763]: I0131 14:57:33.446666 4763 generic.go:334] "Generic (PLEG): container finished" podID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerID="b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1" exitCode=0 Jan 31 14:57:33 crc kubenswrapper[4763]: I0131 14:57:33.446800 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerDied","Data":"b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1"} Jan 31 14:57:41 crc kubenswrapper[4763]: I0131 14:57:41.287082 4763 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bpxtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:57:41 crc kubenswrapper[4763]: I0131 14:57:41.287885 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:57:42 crc kubenswrapper[4763]: I0131 14:57:42.586216 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l8wxr" Jan 31 14:57:43 crc kubenswrapper[4763]: I0131 14:57:43.049632 4763 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mpmpg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:57:43 crc kubenswrapper[4763]: I0131 14:57:43.049722 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:57:44 crc kubenswrapper[4763]: I0131 14:57:44.177396 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:57:44 crc kubenswrapper[4763]: I0131 14:57:44.177470 4763 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:57:46 crc kubenswrapper[4763]: E0131 14:57:46.965936 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 14:57:46 crc kubenswrapper[4763]: E0131 14:57:46.966584 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzr25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wnmmq_openshift-marketplace(2434f0b9-846a-444c-b487-745d4010002b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\": context canceled" logger="UnhandledError" Jan 31 14:57:46 crc kubenswrapper[4763]: E0131 14:57:46.967896 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:d66a84cc704878f5e58a60c449eb4244b9e250105c614dae1d2418e90b51befa\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-wnmmq" 
podUID="2434f0b9-846a-444c-b487-745d4010002b" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.009249 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.009436 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk9ln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k6ddv_openshift-marketplace(b8a35a73-67a0-4bb4-9954-46350d31b017): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.010722 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.033803 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092514 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.092800 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092815 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.092828 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerName="pruner" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092836 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerName="pruner" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092969 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" containerName="route-controller-manager" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.092987 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7ac98e-5206-404b-a648-3ef3d778619c" containerName="pruner" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.093382 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.093468 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.097680 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.097862 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4rwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-mr7l4_openshift-marketplace(f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.099031 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-mr7l4" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.118992 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.119140 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq9nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4m6qg_openshift-marketplace(5a85c02e-9d6e-4d11-be81-242bf4fee8c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.120550 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4m6qg" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.202077 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.203037 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrvp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9df4p_openshift-marketplace(5c097873-7ca4-491d-86c4-31b2ab99d63d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:47 crc kubenswrapper[4763]: E0131 14:57:47.204903 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.220061 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.220125 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221427 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config" (OuterVolumeSpecName: "config") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221469 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221533 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.221563 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") pod \"e1b409a5-8274-478d-98bf-fe2171d90c63\" (UID: \"e1b409a5-8274-478d-98bf-fe2171d90c63\") " Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.222684 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.222958 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.222978 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.223015 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.223070 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.223082 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1b409a5-8274-478d-98bf-fe2171d90c63-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.238666 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv" (OuterVolumeSpecName: "kube-api-access-v7cfv") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "kube-api-access-v7cfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.259543 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1b409a5-8274-478d-98bf-fe2171d90c63" (UID: "e1b409a5-8274-478d-98bf-fe2171d90c63"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323722 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323768 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.323870 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.324245 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7cfv\" (UniqueName: \"kubernetes.io/projected/e1b409a5-8274-478d-98bf-fe2171d90c63-kube-api-access-v7cfv\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.325856 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1b409a5-8274-478d-98bf-fe2171d90c63-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.325653 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.325491 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.343567 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.353207 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"route-controller-manager-6d78d477f7-cmh4k\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.431635 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.541795 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.541871 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg" event={"ID":"e1b409a5-8274-478d-98bf-fe2171d90c63","Type":"ContainerDied","Data":"7b560d0bfea717d413c6f7f997b55322350cc5d7f6770006679df4dc57bee56a"} Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.541950 4763 scope.go:117] "RemoveContainer" containerID="c58f9ab4796cf505743aa587609be5cc9b4b478501a00ba9268fe2ccf4583f9a" Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.552723 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-26pm5"] Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.666970 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:47 crc kubenswrapper[4763]: I0131 14:57:47.669074 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mpmpg"] Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.318431 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.319354 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.325785 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.326161 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.332218 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.375011 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.439166 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.439205 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.541220 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.541303 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.541358 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.557026 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:48 crc kubenswrapper[4763]: I0131 14:57:48.674760 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.048988 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b409a5-8274-478d-98bf-fe2171d90c63" path="/var/lib/kubelet/pods/e1b409a5-8274-478d-98bf-fe2171d90c63/volumes" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.423826 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.423981 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wnmmq" podUID="2434f0b9-846a-444c-b487-745d4010002b" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.424082 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4m6qg" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.424173 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.424576 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-mr7l4" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.474926 4763 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.475092 4763 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dshgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-22bkm_openshift-marketplace(1e24d612-62ed-4bd5-8e07-889710d16851): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.476470 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-22bkm" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.529624 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.556313 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"] Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.556641 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.556663 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.556827 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" containerName="controller-manager" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.557609 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.560455 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" event={"ID":"b67dedfb-accc-467d-a3bb-508eab4f88c8","Type":"ContainerDied","Data":"5825be7f5b0a2372a0714ff20d5e467974a43da635470237d607739086eb1094"} Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.560520 4763 scope.go:117] "RemoveContainer" containerID="b8fb7a2f200b2701b2184c4c721af8933812270f9a51bf230ed65be2f7cd5bd1" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.560594 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bpxtg" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.565058 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"] Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.584906 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26pm5" event={"ID":"84302428-88e1-47ba-84cc-7d12472f9aa2","Type":"ContainerStarted","Data":"fd29a6a06684bc8f2ef118669fc48ea72399bc351cac7924ed1d06f13faf06d9"} Jan 31 14:57:49 crc kubenswrapper[4763]: E0131 14:57:49.586945 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-22bkm" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656605 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656742 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656797 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656829 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.656872 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") pod \"b67dedfb-accc-467d-a3bb-508eab4f88c8\" (UID: \"b67dedfb-accc-467d-a3bb-508eab4f88c8\") " Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.658224 4763 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.658428 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config" (OuterVolumeSpecName: "config") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.658782 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.664121 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.667139 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj" (OuterVolumeSpecName: "kube-api-access-j5csj") pod "b67dedfb-accc-467d-a3bb-508eab4f88c8" (UID: "b67dedfb-accc-467d-a3bb-508eab4f88c8"). InnerVolumeSpecName "kube-api-access-j5csj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.703338 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.746044 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:57:49 crc kubenswrapper[4763]: W0131 14:57:49.748526 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod985646ce_82c3_4387_8f8d_bf1ac731426c.slice/crio-438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170 WatchSource:0}: Error finding container 438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170: Status 404 returned error can't find the container with id 438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170 Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758391 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758470 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758495 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758512 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758529 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758562 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758571 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b67dedfb-accc-467d-a3bb-508eab4f88c8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758581 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5csj\" (UniqueName: \"kubernetes.io/projected/b67dedfb-accc-467d-a3bb-508eab4f88c8-kube-api-access-j5csj\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758591 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.758598 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67dedfb-accc-467d-a3bb-508eab4f88c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859756 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859824 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859853 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859881 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.859931 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.860922 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.861339 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.861396 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.865636 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.878781 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"controller-manager-856c6cb8d4-c5dg5\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.883739 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.891193 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:57:49 crc kubenswrapper[4763]: I0131 14:57:49.893921 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bpxtg"] Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.305614 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"] Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.602235 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerStarted","Data":"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.602284 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerStarted","Data":"73fa7d5ead40a096c0ba2f6504c9a4404caea7c60cd62d26d61630f96fde5c3e"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.602586 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.605155 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" 
event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerStarted","Data":"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.605186 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerStarted","Data":"e8430d0dd8e723932922589437322bb5e5478cc04d3605ffd87f3ce8b909e891"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.605580 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.608854 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.609366 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26pm5" event={"ID":"84302428-88e1-47ba-84cc-7d12472f9aa2","Type":"ContainerStarted","Data":"cb6d2ccafa4331eb8e48c12622c9ab41a74f8718e5b5a0f5ab2e705ff4060b00"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.609396 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-26pm5" event={"ID":"84302428-88e1-47ba-84cc-7d12472f9aa2","Type":"ContainerStarted","Data":"c91c6ce9c4ef12580ce3cd84831fd39147072d13711ed46a5cb8d64a6a605274"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.612099 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.614069 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerStarted","Data":"4e5eb8555feeeb7ac64a72a4b6f44a1ed5577133c84b27e11cbefaf9bba1a20e"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.614091 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerStarted","Data":"438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170"} Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.624514 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" podStartSLOduration=5.624491403 podStartE2EDuration="5.624491403s" podCreationTimestamp="2026-01-31 14:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.617600597 +0000 UTC m=+190.372338880" watchObservedRunningTime="2026-01-31 14:57:50.624491403 +0000 UTC m=+190.379229706" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.650068 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" podStartSLOduration=5.65005091 podStartE2EDuration="5.65005091s" podCreationTimestamp="2026-01-31 14:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.646759288 +0000 UTC m=+190.401497601" 
watchObservedRunningTime="2026-01-31 14:57:50.65005091 +0000 UTC m=+190.404789203" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.679201 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-26pm5" podStartSLOduration=170.67918005 podStartE2EDuration="2m50.67918005s" podCreationTimestamp="2026-01-31 14:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.678686494 +0000 UTC m=+190.433424777" watchObservedRunningTime="2026-01-31 14:57:50.67918005 +0000 UTC m=+190.433918343" Jan 31 14:57:50 crc kubenswrapper[4763]: I0131 14:57:50.679688 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.679680425 podStartE2EDuration="2.679680425s" podCreationTimestamp="2026-01-31 14:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:50.665129501 +0000 UTC m=+190.419867804" watchObservedRunningTime="2026-01-31 14:57:50.679680425 +0000 UTC m=+190.434418718" Jan 31 14:57:51 crc kubenswrapper[4763]: I0131 14:57:51.050549 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67dedfb-accc-467d-a3bb-508eab4f88c8" path="/var/lib/kubelet/pods/b67dedfb-accc-467d-a3bb-508eab4f88c8/volumes" Jan 31 14:57:51 crc kubenswrapper[4763]: I0131 14:57:51.622502 4763 generic.go:334] "Generic (PLEG): container finished" podID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerID="4e5eb8555feeeb7ac64a72a4b6f44a1ed5577133c84b27e11cbefaf9bba1a20e" exitCode=0 Jan 31 14:57:51 crc kubenswrapper[4763]: I0131 14:57:51.622560 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerDied","Data":"4e5eb8555feeeb7ac64a72a4b6f44a1ed5577133c84b27e11cbefaf9bba1a20e"} Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.112229 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.113852 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.133272 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.232708 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.232778 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.232798 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334198 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334264 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334339 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334450 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.334494 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"installer-9-crc\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.357388 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"e242425d-c262-45a8-b933-a84abec6740e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.438576 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.827177 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.948241 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") pod \"985646ce-82c3-4387-8f8d-bf1ac731426c\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.948370 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") pod \"985646ce-82c3-4387-8f8d-bf1ac731426c\" (UID: \"985646ce-82c3-4387-8f8d-bf1ac731426c\") " Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.948572 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "985646ce-82c3-4387-8f8d-bf1ac731426c" (UID: "985646ce-82c3-4387-8f8d-bf1ac731426c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:57:55 crc kubenswrapper[4763]: I0131 14:57:55.953239 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "985646ce-82c3-4387-8f8d-bf1ac731426c" (UID: "985646ce-82c3-4387-8f8d-bf1ac731426c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.001193 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.049370 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/985646ce-82c3-4387-8f8d-bf1ac731426c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.049415 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/985646ce-82c3-4387-8f8d-bf1ac731426c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:56 crc kubenswrapper[4763]: W0131 14:57:56.408936 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode242425d_c262_45a8_b933_a84abec6740e.slice/crio-dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95 WatchSource:0}: Error finding container dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95: Status 404 returned error can't find the container with id dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95 Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.659839 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"985646ce-82c3-4387-8f8d-bf1ac731426c","Type":"ContainerDied","Data":"438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170"} Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.660167 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438aba147fbe4efa4a5da0545ebc6a5f8f702a5e276652ac1c9ea1976cafb170" Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.659951 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:57:56 crc kubenswrapper[4763]: I0131 14:57:56.662637 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerStarted","Data":"dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95"} Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.670571 4763 generic.go:334] "Generic (PLEG): container finished" podID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081" exitCode=0 Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.670640 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"} Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.672925 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerStarted","Data":"441b596e02022ae8016874e04bb2e0f024181a6d82a5065a1b6c5f7422bd492e"} Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.674777 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f3cc890-2041-4983-8501-088c40c22b77" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" exitCode=0 Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.674812 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807"} Jan 31 14:57:57 crc kubenswrapper[4763]: I0131 14:57:57.728781 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.728765936 podStartE2EDuration="2.728765936s" podCreationTimestamp="2026-01-31 14:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:57.726461731 +0000 UTC m=+197.481200024" watchObservedRunningTime="2026-01-31 14:57:57.728765936 +0000 UTC m=+197.483504229" Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.682668 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerStarted","Data":"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"} Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.685485 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerStarted","Data":"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1"} Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.727431 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4bfmh" podStartSLOduration=8.83060615 podStartE2EDuration="44.727405069s" podCreationTimestamp="2026-01-31 14:57:14 +0000 UTC" firstStartedPulling="2026-01-31 14:57:22.244111688 +0000 UTC m=+161.998849971" lastFinishedPulling="2026-01-31 14:57:58.140910597 +0000 UTC m=+197.895648890" 
observedRunningTime="2026-01-31 14:57:58.705428271 +0000 UTC m=+198.460166584" watchObservedRunningTime="2026-01-31 14:57:58.727405069 +0000 UTC m=+198.482143382" Jan 31 14:57:58 crc kubenswrapper[4763]: I0131 14:57:58.729385 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wxzg8" podStartSLOduration=8.906854941 podStartE2EDuration="44.729372974s" podCreationTimestamp="2026-01-31 14:57:14 +0000 UTC" firstStartedPulling="2026-01-31 14:57:22.243419957 +0000 UTC m=+161.998158240" lastFinishedPulling="2026-01-31 14:57:58.06593797 +0000 UTC m=+197.820676273" observedRunningTime="2026-01-31 14:57:58.724647001 +0000 UTC m=+198.479385324" watchObservedRunningTime="2026-01-31 14:57:58.729372974 +0000 UTC m=+198.484111317" Jan 31 14:58:00 crc kubenswrapper[4763]: I0131 14:58:00.877929 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:58:02 crc kubenswrapper[4763]: I0131 14:58:02.718634 4763 generic.go:334] "Generic (PLEG): container finished" podID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" exitCode=0 Jan 31 14:58:02 crc kubenswrapper[4763]: I0131 14:58:02.721863 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b"} Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.498044 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.498535 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.735026 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerStarted","Data":"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239"} Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.736732 4763 generic.go:334] "Generic (PLEG): container finished" podID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerID="685557cff2acc69b1516ebfc5ddcd84394226cdd6a924fb7fd37630cb88d114b" exitCode=0 Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.736799 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"685557cff2acc69b1516ebfc5ddcd84394226cdd6a924fb7fd37630cb88d114b"} Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.738585 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerStarted","Data":"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5"} Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.740489 4763 generic.go:334] "Generic (PLEG): container finished" podID="1e24d612-62ed-4bd5-8e07-889710d16851" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" exitCode=0 Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.740527 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144"} Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.808685 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9df4p" podStartSLOduration=1.9307628879999998 podStartE2EDuration="53.808671472s" podCreationTimestamp="2026-01-31 14:57:11 +0000 UTC" firstStartedPulling="2026-01-31 14:57:12.266875242 +0000 UTC m=+152.021613535" lastFinishedPulling="2026-01-31 14:58:04.144783816 +0000 UTC m=+203.899522119" observedRunningTime="2026-01-31 14:58:04.78939888 +0000 UTC m=+204.544137173" watchObservedRunningTime="2026-01-31 14:58:04.808671472 +0000 UTC m=+204.563409765" Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.905502 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:04 crc kubenswrapper[4763]: I0131 14:58:04.905570 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.747491 4763 generic.go:334] "Generic (PLEG): container finished" podID="2434f0b9-846a-444c-b487-745d4010002b" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" exitCode=0 Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.747566 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1"} Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.751973 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerStarted","Data":"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a"} Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.754978 4763 generic.go:334] "Generic (PLEG): container finished" podID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" exitCode=0 Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.755026 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239"} Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.777164 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"] Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.777814 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager" containerID="cri-o://e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" gracePeriod=30 Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.795228 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wxzg8" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" probeResult="failure" output=< Jan 
31 14:58:05 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s Jan 31 14:58:05 crc kubenswrapper[4763]: > Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.823787 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.823993 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager" containerID="cri-o://d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" gracePeriod=30 Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.845254 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-22bkm" podStartSLOduration=2.626649923 podStartE2EDuration="52.845237561s" podCreationTimestamp="2026-01-31 14:57:13 +0000 UTC" firstStartedPulling="2026-01-31 14:57:15.31944481 +0000 UTC m=+155.074183093" lastFinishedPulling="2026-01-31 14:58:05.538032438 +0000 UTC m=+205.292770731" observedRunningTime="2026-01-31 14:58:05.843708648 +0000 UTC m=+205.598446941" watchObservedRunningTime="2026-01-31 14:58:05.845237561 +0000 UTC m=+205.599975854" Jan 31 14:58:05 crc kubenswrapper[4763]: I0131 14:58:05.960925 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4bfmh" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" probeResult="failure" output=< Jan 31 14:58:05 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s Jan 31 14:58:05 crc kubenswrapper[4763]: > Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.521225 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.610624 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.677879 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.677950 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.677986 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.678090 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") pod \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\" (UID: \"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.678834 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.678847 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config" (OuterVolumeSpecName: "config") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.685008 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55" (OuterVolumeSpecName: "kube-api-access-nrm55") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "kube-api-access-nrm55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.697847 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" (UID: "fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761328 4763 generic.go:334] "Generic (PLEG): container finished" podID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" exitCode=0 Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761393 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerDied","Data":"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"} Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761514 4763 scope.go:117] "RemoveContainer" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.761657 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" event={"ID":"fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27","Type":"ContainerDied","Data":"73fa7d5ead40a096c0ba2f6504c9a4404caea7c60cd62d26d61630f96fde5c3e"} Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.762683 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.763359 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerStarted","Data":"ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971"} Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.765343 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" exitCode=0 Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.765392 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9"} Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767613 4763 generic.go:334] "Generic (PLEG): container finished" podID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" exitCode=0 Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767634 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerDied","Data":"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"} Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767653 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" event={"ID":"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8","Type":"ContainerDied","Data":"e8430d0dd8e723932922589437322bb5e5478cc04d3605ffd87f3ce8b909e891"} Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.767716 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.778934 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.778984 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779017 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779046 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779125 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") pod \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\" (UID: \"d5ce857d-a7e2-4d48-8e82-20f4db8e28c8\") " Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779351 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779366 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779375 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm55\" (UniqueName: \"kubernetes.io/projected/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-kube-api-access-nrm55\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779384 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779758 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.779867 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config" (OuterVolumeSpecName: "config") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.780549 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.783067 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7" (OuterVolumeSpecName: "kube-api-access-5sxs7") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "kube-api-access-5sxs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.792963 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" (UID: "d5ce857d-a7e2-4d48-8e82-20f4db8e28c8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.803251 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mr7l4" podStartSLOduration=3.431062845 podStartE2EDuration="55.803228542s" podCreationTimestamp="2026-01-31 14:57:11 +0000 UTC" firstStartedPulling="2026-01-31 14:57:13.281225983 +0000 UTC m=+153.035964276" lastFinishedPulling="2026-01-31 14:58:05.65339168 +0000 UTC m=+205.408129973" observedRunningTime="2026-01-31 14:58:06.801187944 +0000 UTC m=+206.555926237" watchObservedRunningTime="2026-01-31 14:58:06.803228542 +0000 UTC m=+206.557966845" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.835081 4763 scope.go:117] "RemoveContainer" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.840626 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:58:06 crc kubenswrapper[4763]: E0131 14:58:06.841970 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451\": container with ID starting with d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451 not found: ID does not exist" containerID="d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.842006 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451"} err="failed to get container 
status \"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451\": rpc error: code = NotFound desc = could not find container \"d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451\": container with ID starting with d9c7bc88fb3b8b64c3cde989bbf4101e673826378adc8f418f2c372b7ca2d451 not found: ID does not exist" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.842069 4763 scope.go:117] "RemoveContainer" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.845572 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d78d477f7-cmh4k"] Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.865930 4763 scope.go:117] "RemoveContainer" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" Jan 31 14:58:06 crc kubenswrapper[4763]: E0131 14:58:06.866399 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9\": container with ID starting with e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9 not found: ID does not exist" containerID="e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.866446 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9"} err="failed to get container status \"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9\": rpc error: code = NotFound desc = could not find container \"e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9\": container with ID starting with e170936a60a2728421300c994629758ecb1f18fd2343bfef6ccce6f63a3ad4b9 not found: ID does not exist" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880215 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880245 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880256 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880264 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sxs7\" (UniqueName: \"kubernetes.io/projected/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-kube-api-access-5sxs7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:06 crc kubenswrapper[4763]: I0131 14:58:06.880273 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.048651 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" path="/var/lib/kubelet/pods/fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27/volumes" Jan 31 14:58:07 crc 
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.083053 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.085653 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-856c6cb8d4-c5dg5"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232481 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"]
Jan 31 14:58:07 crc kubenswrapper[4763]: E0131 14:58:07.232789 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232809 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: E0131 14:58:07.232832 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232841 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: E0131 14:58:07.232855 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerName="pruner"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232863 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerName="pruner"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.232995 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8eb1bf-99e7-41e8-b2ab-7bf622a12e27" containerName="route-controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.233008 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" containerName="controller-manager"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.233019 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="985646ce-82c3-4387-8f8d-bf1ac731426c" containerName="pruner"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.233484 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.235510 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.236298 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.236488 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.236827 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.237088 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.237178 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.237339 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.239307 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240292 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240397 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240764 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.240964 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.241109 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.244809 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.245784 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.247765 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.319672 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"]
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385184 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385240 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385271 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385292 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385317 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385353 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385375 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385420 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"
Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.385446 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6"
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.490797 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.490926 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.490958 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491086 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491110 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491192 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.491226 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod 
\"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.492010 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.492038 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.492819 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.493092 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.496598 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.498511 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.498521 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.516200 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"controller-manager-76646487c9-96gk6\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.517864 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"route-controller-manager-768768945b-n8gjd\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.614819 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.629285 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.776792 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerStarted","Data":"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed"} Jan 31 14:58:07 crc kubenswrapper[4763]: I0131 14:58:07.794163 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k6ddv" podStartSLOduration=3.699863981 podStartE2EDuration="57.794149188s" podCreationTimestamp="2026-01-31 14:57:10 +0000 UTC" firstStartedPulling="2026-01-31 14:57:12.244546875 +0000 UTC m=+151.999285168" lastFinishedPulling="2026-01-31 14:58:06.338832082 +0000 UTC m=+206.093570375" observedRunningTime="2026-01-31 14:58:07.792962555 +0000 UTC m=+207.547700848" watchObservedRunningTime="2026-01-31 14:58:07.794149188 +0000 UTC m=+207.548887481" Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.057285 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ce857d-a7e2-4d48-8e82-20f4db8e28c8" path="/var/lib/kubelet/pods/d5ce857d-a7e2-4d48-8e82-20f4db8e28c8/volumes" Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.653018 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.656376 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:09 crc kubenswrapper[4763]: W0131 14:58:09.661904 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1904fac_b0bf_46bf_b137_8cae4630dc39.slice/crio-6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01 WatchSource:0}: Error finding container 6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01: Status 404 returned error can't find the container with id 6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01 Jan 31 14:58:09 crc kubenswrapper[4763]: W0131 14:58:09.663676 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3107af8a_7bbd_4214_84e6_a3a18e79510f.slice/crio-250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45 WatchSource:0}: Error finding container 250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45: Status 404 returned error can't find the container with id 250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45 Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.789958 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerStarted","Data":"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97"} Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.790952 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerStarted","Data":"250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45"} Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.793260 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerStarted","Data":"6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01"} Jan 31 14:58:09 crc kubenswrapper[4763]: I0131 14:58:09.806774 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wnmmq" podStartSLOduration=1.993399132 podStartE2EDuration="56.806756286s" podCreationTimestamp="2026-01-31 14:57:13 +0000 UTC" firstStartedPulling="2026-01-31 14:57:14.292211937 +0000 UTC m=+154.046950240" lastFinishedPulling="2026-01-31 14:58:09.105569091 +0000 UTC m=+208.860307394" observedRunningTime="2026-01-31 14:58:09.803282288 +0000 UTC m=+209.558020581" watchObservedRunningTime="2026-01-31 14:58:09.806756286 +0000 UTC m=+209.561494579" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.324031 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.324429 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.420545 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.684296 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.684347 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.741106 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.847607 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.849047 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9df4p" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.909712 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.910066 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:11 crc kubenswrapper[4763]: I0131 14:58:11.945915 4763 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.822580 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerStarted","Data":"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e"} Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.823802 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.825929 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerStarted","Data":"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"} Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.827175 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerStarted","Data":"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb"} Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.837943 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.846545 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" podStartSLOduration=7.846527188 podStartE2EDuration="7.846527188s" podCreationTimestamp="2026-01-31 14:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:12.842970578 +0000 UTC m=+212.597708861" watchObservedRunningTime="2026-01-31 14:58:12.846527188 +0000 UTC m=+212.601265481" Jan 31 14:58:12 crc kubenswrapper[4763]: I0131 14:58:12.908395 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.516967 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.517043 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.565000 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.832872 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.840458 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.853981 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" podStartSLOduration=8.853964658 podStartE2EDuration="8.853964658s" podCreationTimestamp="2026-01-31 14:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:13.852128437 +0000 UTC m=+213.606866720" watchObservedRunningTime="2026-01-31 14:58:13.853964658 +0000 UTC m=+213.608702951" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.890723 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4m6qg" podStartSLOduration=4.091857244 podStartE2EDuration="1m2.890708351s" podCreationTimestamp="2026-01-31 14:57:11 +0000 UTC" firstStartedPulling="2026-01-31 14:57:12.249608883 +0000 UTC m=+152.004347176" lastFinishedPulling="2026-01-31 14:58:11.04845999 +0000 UTC m=+210.803198283" observedRunningTime="2026-01-31 14:58:13.887288735 +0000 UTC m=+213.642027068" watchObservedRunningTime="2026-01-31 14:58:13.890708351 +0000 UTC m=+213.645446634" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.904216 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.904283 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:13 crc kubenswrapper[4763]: I0131 14:58:13.950780 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.176958 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177018 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177063 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177562 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.177616 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb" gracePeriod=600 Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.305833 4763 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.559049 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.622856 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wxzg8" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.854036 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb" exitCode=0 Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.854561 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb"} Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.930649 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:14 crc kubenswrapper[4763]: I0131 14:58:14.977764 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:15 crc kubenswrapper[4763]: I0131 14:58:15.020150 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:15 crc kubenswrapper[4763]: I0131 14:58:15.860949 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mr7l4" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" containerID="cri-o://ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971" gracePeriod=2 Jan 31 14:58:16 crc kubenswrapper[4763]: I0131 14:58:16.716899 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:58:16 crc kubenswrapper[4763]: I0131 14:58:16.867192 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-22bkm" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" containerID="cri-o://3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" gracePeriod=2 Jan 31 14:58:17 crc kubenswrapper[4763]: I0131 14:58:17.873178 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b"} Jan 31 14:58:18 crc kubenswrapper[4763]: I0131 14:58:18.884031 4763 generic.go:334] "Generic (PLEG): container finished" podID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerID="ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971" exitCode=0 Jan 31 14:58:18 crc kubenswrapper[4763]: I0131 14:58:18.884095 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.107210 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-4bfmh"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.107632 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4bfmh" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" containerID="cri-o://fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" gracePeriod=2 Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.447249 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.561088 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") pod \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.562355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") pod \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.562387 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") pod \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\" (UID: \"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.563565 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities" (OuterVolumeSpecName: "utilities") pod "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" (UID: "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.567684 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh" (OuterVolumeSpecName: "kube-api-access-m4rwh") pod "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" (UID: "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"). InnerVolumeSpecName "kube-api-access-m4rwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.612542 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.626346 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" (UID: "f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.654624 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.663668 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4rwh\" (UniqueName: \"kubernetes.io/projected/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-kube-api-access-m4rwh\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.663715 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.663726 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765032 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") pod \"1e24d612-62ed-4bd5-8e07-889710d16851\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765116 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") pod \"1e24d612-62ed-4bd5-8e07-889710d16851\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765203 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") pod \"463b0d45-1b3b-46a1-afbd-650fa065b38f\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765229 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") pod \"463b0d45-1b3b-46a1-afbd-650fa065b38f\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") pod \"463b0d45-1b3b-46a1-afbd-650fa065b38f\" (UID: \"463b0d45-1b3b-46a1-afbd-650fa065b38f\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.765290 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") pod \"1e24d612-62ed-4bd5-8e07-889710d16851\" (UID: \"1e24d612-62ed-4bd5-8e07-889710d16851\") " Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.766570 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities" (OuterVolumeSpecName: "utilities") pod "1e24d612-62ed-4bd5-8e07-889710d16851" (UID: "1e24d612-62ed-4bd5-8e07-889710d16851"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.770238 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities" (OuterVolumeSpecName: "utilities") pod "463b0d45-1b3b-46a1-afbd-650fa065b38f" (UID: "463b0d45-1b3b-46a1-afbd-650fa065b38f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.770253 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps" (OuterVolumeSpecName: "kube-api-access-g9fps") pod "463b0d45-1b3b-46a1-afbd-650fa065b38f" (UID: "463b0d45-1b3b-46a1-afbd-650fa065b38f"). InnerVolumeSpecName "kube-api-access-g9fps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.770390 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn" (OuterVolumeSpecName: "kube-api-access-dshgn") pod "1e24d612-62ed-4bd5-8e07-889710d16851" (UID: "1e24d612-62ed-4bd5-8e07-889710d16851"). InnerVolumeSpecName "kube-api-access-dshgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.782833 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.782945 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dshgn\" (UniqueName: \"kubernetes.io/projected/1e24d612-62ed-4bd5-8e07-889710d16851-kube-api-access-dshgn\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.783006 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9fps\" (UniqueName: \"kubernetes.io/projected/463b0d45-1b3b-46a1-afbd-650fa065b38f-kube-api-access-g9fps\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.783060 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.787427 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e24d612-62ed-4bd5-8e07-889710d16851" (UID: "1e24d612-62ed-4bd5-8e07-889710d16851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.884418 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e24d612-62ed-4bd5-8e07-889710d16851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896377 4763 generic.go:334] "Generic (PLEG): container finished" podID="1e24d612-62ed-4bd5-8e07-889710d16851" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" exitCode=0 Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896473 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22bkm" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896496 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896570 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22bkm" event={"ID":"1e24d612-62ed-4bd5-8e07-889710d16851","Type":"ContainerDied","Data":"c82ecb291538ae24976b93059fe5012e9eba52e1be048225cfff7c40417c81ce"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.896602 4763 scope.go:117] "RemoveContainer" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.900996 4763 generic.go:334] "Generic (PLEG): container finished" podID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca" exitCode=0 Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.901093 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4bfmh" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.901108 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.901756 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4bfmh" event={"ID":"463b0d45-1b3b-46a1-afbd-650fa065b38f","Type":"ContainerDied","Data":"64d775f9e024e4389d9fed6e15e2915fc320014ae014969ebec947a0370a2ca9"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.907024 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mr7l4" event={"ID":"f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa","Type":"ContainerDied","Data":"29747698a85a5ba3ecc912242e839ad570821011bfaa89976cf716705b0e4372"} Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.907140 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mr7l4" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.921443 4763 scope.go:117] "RemoveContainer" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.952381 4763 scope.go:117] "RemoveContainer" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.955759 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.975772 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-22bkm"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.978055 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.981761 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mr7l4"] Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.982167 4763 scope.go:117] "RemoveContainer" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" Jan 31 14:58:19 crc kubenswrapper[4763]: E0131 14:58:19.982823 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a\": container with ID starting with 3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a not found: ID does not exist" containerID="3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.982864 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a"} err="failed to get container status \"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a\": rpc error: code = NotFound desc = could not find container \"3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a\": container with ID starting with 3c317f540f94ba82b4fdfb126179450e1b652095fe1a8b866dce98145d81447a not found: ID does not exist" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.982896 4763 scope.go:117] "RemoveContainer" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" Jan 31 14:58:19 crc kubenswrapper[4763]: E0131 14:58:19.983414 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144\": container with ID starting with 6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144 not found: ID does not exist" containerID="6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144" Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.983480 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144"} err="failed to get container status \"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144\": rpc error: code = NotFound desc = could not find container \"6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144\": container with ID starting with 
6feb39515c46869c42bdad8c345d8ddfbd38d5e1b5e876fafd4360ec653e9144 not found: ID does not exist"
Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.983525 4763 scope.go:117] "RemoveContainer" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"
Jan 31 14:58:19 crc kubenswrapper[4763]: E0131 14:58:19.984116 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f\": container with ID starting with b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f not found: ID does not exist" containerID="b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"
Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.984166 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"} err="failed to get container status \"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f\": rpc error: code = NotFound desc = could not find container \"b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f\": container with ID starting with b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f not found: ID does not exist"
Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.984207 4763 scope.go:117] "RemoveContainer" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"
Jan 31 14:58:19 crc kubenswrapper[4763]: I0131 14:58:19.999395 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "463b0d45-1b3b-46a1-afbd-650fa065b38f" (UID: "463b0d45-1b3b-46a1-afbd-650fa065b38f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.009949 4763 scope.go:117] "RemoveContainer" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.033432 4763 scope.go:117] "RemoveContainer" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.061039 4763 scope.go:117] "RemoveContainer" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"
Jan 31 14:58:20 crc kubenswrapper[4763]: E0131 14:58:20.061630 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca\": container with ID starting with fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca not found: ID does not exist" containerID="fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.061691 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca"} err="failed to get container status \"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca\": rpc error: code = NotFound desc = could not find container \"fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca\": container with ID starting with fe04bad61c60f03a4fee1545171714c42e40b646ab8446c28838b4204d69daca not found: ID does not exist"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.061853 4763 scope.go:117] "RemoveContainer" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"
Jan 31 14:58:20 crc kubenswrapper[4763]: E0131 14:58:20.062315 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081\": container with ID starting with 92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081 not found: ID does not exist" containerID="92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062382 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081"} err="failed to get container status \"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081\": rpc error: code = NotFound desc = could not find container \"92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081\": container with ID starting with 92b8c543220171f2670cb4b845ea34abe1120ccb97fa02fed33898d7f0ec2081 not found: ID does not exist"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062430 4763 scope.go:117] "RemoveContainer" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"
Jan 31 14:58:20 crc kubenswrapper[4763]: E0131 14:58:20.062886 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e\": container with ID starting with af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e not found: ID does not exist" containerID="af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062916 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e"} err="failed to get container status \"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e\": rpc error: code = NotFound desc = could not find container \"af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e\": container with ID starting with af0144e6d97f243992322cb824634e0232c08a887c39ce09faf5551f212c4c7e not found: ID does not exist"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.062936 4763 scope.go:117] "RemoveContainer" containerID="ff231089d225f30c4966262b3b6adccb590c1423c4741d70fcc0d75c37452971"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.081419 4763 scope.go:117] "RemoveContainer" containerID="685557cff2acc69b1516ebfc5ddcd84394226cdd6a924fb7fd37630cb88d114b"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.086959 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/463b0d45-1b3b-46a1-afbd-650fa065b38f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.107187 4763 scope.go:117] "RemoveContainer" containerID="cbcf059643f243b97663d9030a999deafef368163de2196a0d497c8e7eabbc09"
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.247897 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"]
Jan 31 14:58:20 crc kubenswrapper[4763]: I0131 14:58:20.256314 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4bfmh"]
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.049165 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" path="/var/lib/kubelet/pods/1e24d612-62ed-4bd5-8e07-889710d16851/volumes"
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.049897 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" path="/var/lib/kubelet/pods/463b0d45-1b3b-46a1-afbd-650fa065b38f/volumes"
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.050590 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" path="/var/lib/kubelet/pods/f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa/volumes"
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.714447 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.714504 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.783776 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:58:21 crc kubenswrapper[4763]: I0131 14:58:21.987962 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4m6qg"
Jan 31 14:58:23 crc kubenswrapper[4763]: I0131 14:58:23.592151 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.778430 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"]
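The paired "RemoveContainer" / "ContainerStatus from runtime service failed ... code = NotFound" records above are the kubelet double-checking CRI-O for containers it is deleting; NotFound here just means the container is already gone, so the "DeleteContainer returned error" lines are garbage-collection noise rather than failures. A minimal Go sketch of the same lookup against the CRI-O socket, using the published k8s.io/cri-api client; the socket path is CRI-O's default and the container ID is copied from the records above, everything else is illustrative:

    // cristatus.go: a hedged sketch of the CRI ContainerStatus query the
    // kubelet issues before deleting a container.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/credentials/insecure"
    	"google.golang.org/grpc/status"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// CRI-O's default runtime endpoint on an OpenShift node.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Container ID taken from the log records above.
    	id := "b5fcefa9833b3ce59a8e38ebaf0025b6e3f0e84dd47d373130e3a6568a1fb10f"
    	resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
    	if status.Code(err) == codes.NotFound {
    		// The condition logged above as "ID does not exist": the
    		// container was already removed by the runtime.
    		fmt.Println("container already gone:", id)
    		return
    	}
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("state:", resp.GetStatus().GetState())
    }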
pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.778629 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" containerID="cri-o://7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" gracePeriod=30 Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.875953 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.876221 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager" containerID="cri-o://7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" gracePeriod=30 Jan 31 14:58:25 crc kubenswrapper[4763]: I0131 14:58:25.906911 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" containerID="cri-o://003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" gracePeriod=15 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.109486 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.109747 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4m6qg" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" containerID="cri-o://743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" gracePeriod=2 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.350786 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379314 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379358 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.379381 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") pod \"3107af8a-7bbd-4214-84e6-a3a18e79510f\" (UID: \"3107af8a-7bbd-4214-84e6-a3a18e79510f\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.380675 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca" (OuterVolumeSpecName: "client-ca") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.380763 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config" (OuterVolumeSpecName: "config") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.388067 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7" (OuterVolumeSpecName: "kube-api-access-d77v7") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "kube-api-access-d77v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.393070 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3107af8a-7bbd-4214-84e6-a3a18e79510f" (UID: "3107af8a-7bbd-4214-84e6-a3a18e79510f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480232 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d77v7\" (UniqueName: \"kubernetes.io/projected/3107af8a-7bbd-4214-84e6-a3a18e79510f-kube-api-access-d77v7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480262 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480274 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3107af8a-7bbd-4214-84e6-a3a18e79510f-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.480282 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3107af8a-7bbd-4214-84e6-a3a18e79510f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.481834 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.544299 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.586982 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587029 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587068 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587092 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") pod \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587124 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587156 4763 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587185 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587201 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587223 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587253 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") pod \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587321 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587340 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") pod \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\" (UID: \"5a85c02e-9d6e-4d11-be81-242bf4fee8c4\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587369 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.587406 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") pod \"275ea46d-7a78-4457-a5ba-7b3000170d0e\" (UID: \"275ea46d-7a78-4457-a5ba-7b3000170d0e\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.589203 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.589585 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.595030 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.596333 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities" (OuterVolumeSpecName: "utilities") pod "5a85c02e-9d6e-4d11-be81-242bf4fee8c4" (UID: "5a85c02e-9d6e-4d11-be81-242bf4fee8c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.596421 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.597908 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.598494 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.598543 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.598858 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.599217 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.610068 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.612909 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf" (OuterVolumeSpecName: "kube-api-access-sq9nf") pod "5a85c02e-9d6e-4d11-be81-242bf4fee8c4" (UID: "5a85c02e-9d6e-4d11-be81-242bf4fee8c4"). InnerVolumeSpecName "kube-api-access-sq9nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616104 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616329 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p" (OuterVolumeSpecName: "kube-api-access-llk9p") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "kube-api-access-llk9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616466 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.616483 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "275ea46d-7a78-4457-a5ba-7b3000170d0e" (UID: "275ea46d-7a78-4457-a5ba-7b3000170d0e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.673839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a85c02e-9d6e-4d11-be81-242bf4fee8c4" (UID: "5a85c02e-9d6e-4d11-be81-242bf4fee8c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688439 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq9nf\" (UniqueName: \"kubernetes.io/projected/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-kube-api-access-sq9nf\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688643 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688778 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688863 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.688956 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689036 4763 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689122 4763 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/275ea46d-7a78-4457-a5ba-7b3000170d0e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689211 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689289 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689366 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689442 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689530 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a85c02e-9d6e-4d11-be81-242bf4fee8c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc 
kubenswrapper[4763]: I0131 14:58:26.689610 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689719 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llk9p\" (UniqueName: \"kubernetes.io/projected/275ea46d-7a78-4457-a5ba-7b3000170d0e-kube-api-access-llk9p\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689832 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.689922 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.690003 4763 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/275ea46d-7a78-4457-a5ba-7b3000170d0e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.803666 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892213 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892725 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892818 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.892929 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893032 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") pod \"f1904fac-b0bf-46bf-b137-8cae4630dc39\" (UID: \"f1904fac-b0bf-46bf-b137-8cae4630dc39\") " Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893317 4763 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893448 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config" (OuterVolumeSpecName: "config") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.893782 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.896725 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7" (OuterVolumeSpecName: "kube-api-access-sp7h7") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "kube-api-access-sp7h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.900839 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1904fac-b0bf-46bf-b137-8cae4630dc39" (UID: "f1904fac-b0bf-46bf-b137-8cae4630dc39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.963901 4763 generic.go:334] "Generic (PLEG): container finished" podID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.963995 4763 util.go:48] "No ready sandbox for pod can be found. 
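The "Generic (PLEG): container finished ... exitCode=0" records are the pod lifecycle event generator noticing that the killed containers exited cleanly; while the pod object still exists, the same exit codes are visible in its containerStatuses. A client-go sketch that reads them; the pod coordinates are from the log, the kubeconfig path is an assumption:

    // exitcodes.go: print terminated container exit codes for a pod, roughly
    // the information PLEG relays as "container finished ... exitCode=0".
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pod, err := client.CoreV1().Pods("openshift-route-controller-manager").Get(
    		context.TODO(), "route-controller-manager-768768945b-n8gjd", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, cs := range pod.Status.ContainerStatuses {
    		// State.Terminated is nil for running or waiting containers.
    		if t := cs.State.Terminated; t != nil {
    			fmt.Printf("%s exited with code %d\n", cs.Name, t.ExitCode)
    		}
    	}
    }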
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.964004 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerDied","Data":"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.964098 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd" event={"ID":"3107af8a-7bbd-4214-84e6-a3a18e79510f","Type":"ContainerDied","Data":"250afbceb4491af0b71e1680cfee8a18ebd2db8880f0ea2683b0332562116b45"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.964121 4763 scope.go:117] "RemoveContainer" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966118 4763 generic.go:334] "Generic (PLEG): container finished" podID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966167 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerDied","Data":"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966190 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" event={"ID":"f1904fac-b0bf-46bf-b137-8cae4630dc39","Type":"ContainerDied","Data":"6d86f7ce32eb06c34ecceb3c5a5ea319381fd9969dad831555e155e547b2cd01"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.966232 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76646487c9-96gk6" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970255 4763 generic.go:334] "Generic (PLEG): container finished" podID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerDied","Data":"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970360 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" event={"ID":"275ea46d-7a78-4457-a5ba-7b3000170d0e","Type":"ContainerDied","Data":"4c129b0efc8ba7e81700bd2c14706e7d50e9bb2c2ff1bf01d5aadf4cd55835b7"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.970480 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8pcvn" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974063 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" exitCode=0 Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974098 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974122 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4m6qg" event={"ID":"5a85c02e-9d6e-4d11-be81-242bf4fee8c4","Type":"ContainerDied","Data":"9c7dca8f63ce2a8f4eb26e014c54162f55c4578fef6f425b844a6c85dc4561db"} Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.974203 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4m6qg" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.979711 4763 scope.go:117] "RemoveContainer" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" Jan 31 14:58:26 crc kubenswrapper[4763]: E0131 14:58:26.983187 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb\": container with ID starting with 7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb not found: ID does not exist" containerID="7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.983243 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb"} err="failed to get container status \"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb\": rpc error: code = NotFound desc = could not find container \"7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb\": container with ID starting with 7be9305ee4de774cd6027ea43532a5c3b6b920732b57a3a527ee8c81f8260bbb not found: ID does not exist" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.983278 4763 scope.go:117] "RemoveContainer" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995458 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp7h7\" (UniqueName: \"kubernetes.io/projected/f1904fac-b0bf-46bf-b137-8cae4630dc39-kube-api-access-sp7h7\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995575 4763 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995585 4763 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1904fac-b0bf-46bf-b137-8cae4630dc39-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995596 4763 reconciler_common.go:293] "Volume detached for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:26 crc kubenswrapper[4763]: I0131 14:58:26.995604 4763 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1904fac-b0bf-46bf-b137-8cae4630dc39-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.001817 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.006165 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-768768945b-n8gjd"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.007867 4763 scope.go:117] "RemoveContainer" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.008494 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e\": container with ID starting with 7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e not found: ID does not exist" containerID="7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.008526 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e"} err="failed to get container status \"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e\": rpc error: code = NotFound desc = could not find container \"7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e\": container with ID starting with 7d5409a4eb9aeba180cdcea87c6fa8d711fedc996ee925bdc8fcb33bd1d7680e not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.008583 4763 scope.go:117] "RemoveContainer" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.019509 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.019546 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8pcvn"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.023122 4763 scope.go:117] "RemoveContainer" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.023519 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de\": container with ID starting with 003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de not found: ID does not exist" containerID="003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.023562 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de"} err="failed to get container status \"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de\": rpc error: code 
= NotFound desc = could not find container \"003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de\": container with ID starting with 003c08e2c29d3c5b59c6a1fd94bb61e81bf58c9d936a5ff50b3a74fa00ee51de not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.023584 4763 scope.go:117] "RemoveContainer" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.026362 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.028733 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76646487c9-96gk6"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.034535 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.035713 4763 scope.go:117] "RemoveContainer" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.036992 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4m6qg"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.048208 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" path="/var/lib/kubelet/pods/275ea46d-7a78-4457-a5ba-7b3000170d0e/volumes" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.048727 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" path="/var/lib/kubelet/pods/3107af8a-7bbd-4214-84e6-a3a18e79510f/volumes" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.049195 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" path="/var/lib/kubelet/pods/5a85c02e-9d6e-4d11-be81-242bf4fee8c4/volumes" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.050265 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" path="/var/lib/kubelet/pods/f1904fac-b0bf-46bf-b137-8cae4630dc39/volumes" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.052870 4763 scope.go:117] "RemoveContainer" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.064666 4763 scope.go:117] "RemoveContainer" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.065677 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a\": container with ID starting with 743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a not found: ID does not exist" containerID="743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.065716 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a"} err="failed to get container status \"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a\": rpc error: code = NotFound desc = could not find container 
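"Cleaned up orphaned pod volumes dir" marks the end of teardown: once every volume is unmounted and detached, the kubelet removes /var/lib/kubelet/pods/<uid>/volumes. A small sketch that lists pod directories still holding a volumes/ subdirectory, i.e. pods the kubelet has not finished tearing down; the directory layout is the standard kubelet pod layout seen in the paths above, the program itself is illustrative:

    // orphan_volumes.go: list pod UID directories under /var/lib/kubelet/pods
    // that still contain a volumes/ subdirectory -- the dirs the kubelet
    // reports as "Cleaned up orphaned pod volumes dir" once unmounting is done.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	root := "/var/lib/kubelet/pods"
    	entries, err := os.ReadDir(root)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range entries {
    		if !e.IsDir() {
    			continue
    		}
    		// Each directory name is a pod UID, as in the log records above.
    		volumes := filepath.Join(root, e.Name(), "volumes")
    		if _, err := os.Stat(volumes); err == nil {
    			fmt.Println("pod", e.Name(), "still has", volumes)
    		}
    	}
    }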
\"743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a\": container with ID starting with 743edce5c6a64dd51dde28d7391bbad2b836f23024b81595df21ceb2f4bc226a not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.065738 4763 scope.go:117] "RemoveContainer" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.066028 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9\": container with ID starting with 52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9 not found: ID does not exist" containerID="52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.066062 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9"} err="failed to get container status \"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9\": rpc error: code = NotFound desc = could not find container \"52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9\": container with ID starting with 52a48be71e213b158459bde46ac394986ffd4e69dafbdbf7529020077d9cafb9 not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.066085 4763 scope.go:117] "RemoveContainer" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.066325 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba\": container with ID starting with 020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba not found: ID does not exist" containerID="020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.066345 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba"} err="failed to get container status \"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba\": rpc error: code = NotFound desc = could not find container \"020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba\": container with ID starting with 020b220c679a66fe9c3c2990addb68ddfb9cb55888df25b0768fff8146c738ba not found: ID does not exist" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.257905 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79887f45c6-xhvgp"] Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258208 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258226 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258242 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258249 4763 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258260 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258271 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258284 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258291 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258301 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258307 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258317 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258325 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258337 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258345 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258355 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258363 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258374 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258381 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258391 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258400 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258414 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 
14:58:27.258422 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258430 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258437 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258452 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258459 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="extract-utilities" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258468 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258476 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: E0131 14:58:27.258484 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258510 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="extract-content" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258626 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a85c02e-9d6e-4d11-be81-242bf4fee8c4" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258640 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="463b0d45-1b3b-46a1-afbd-650fa065b38f" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258725 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="275ea46d-7a78-4457-a5ba-7b3000170d0e" containerName="oauth-openshift" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258738 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="3107af8a-7bbd-4214-84e6-a3a18e79510f" containerName="route-controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258747 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1904fac-b0bf-46bf-b137-8cae4630dc39" containerName="controller-manager" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258756 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a48e2e-604d-4cd9-b0c8-a290f4a81ffa" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.258767 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e24d612-62ed-4bd5-8e07-889710d16851" containerName="registry-server" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.259214 4763 util.go:30] "No sandbox for pod can be found. 
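The interleaved cpu_manager / state_mem / memory_manager records show the kubelet pruning per-container CPU and memory assignments left behind by the pods it just removed, before admitting the replacement controller-manager pod; the "RemoveStaleState: removing container" lines logged at error level are routine cleanup here. The CPU manager checkpoints this state to /var/lib/kubelet/cpu_manager_state; a hedged sketch that dumps that file, decoding into a loose map since the exact schema is a kubelet implementation detail:

    // cpustate.go: dump the kubelet CPU manager checkpoint whose entries the
    // "Deleted CPUSet assignment" records above are pruning. The path is the
    // kubelet's default; the field layout is deliberately not assumed.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var state map[string]interface{}
    	if err := json.Unmarshal(raw, &state); err != nil {
    		log.Fatal(err)
    	}
    	for k, v := range state {
    		fmt.Printf("%s: %v\n", k, v)
    	}
    }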
Need to start a new one" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.262439 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.265585 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.265634 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.265831 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.266291 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.266307 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.271509 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.272579 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.276254 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277040 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277479 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277871 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.277999 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.278174 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.289346 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79887f45c6-xhvgp"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.302561 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-proxy-ca-bundles\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.302808 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380e01ec-794a-4356-995c-ef1113b1b126-serving-cert\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.302976 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-config\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.309887 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.310964 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-client-ca\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.311128 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbp5b\" (UniqueName: \"kubernetes.io/projected/380e01ec-794a-4356-995c-ef1113b1b126-kube-api-access-sbp5b\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.315411 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5"] Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.413644 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2q2\" (UniqueName: \"kubernetes.io/projected/45e1613c-b869-484e-a000-da4460283966-kube-api-access-2p2q2\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.413843 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-client-ca\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.413928 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbp5b\" (UniqueName: \"kubernetes.io/projected/380e01ec-794a-4356-995c-ef1113b1b126-kube-api-access-sbp5b\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414034 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/45e1613c-b869-484e-a000-da4460283966-serving-cert\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414113 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-proxy-ca-bundles\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414205 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380e01ec-794a-4356-995c-ef1113b1b126-serving-cert\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414279 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-config\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414342 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-config\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.414380 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-client-ca\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.417377 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-proxy-ca-bundles\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.418742 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-config\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.419410 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380e01ec-794a-4356-995c-ef1113b1b126-client-ca\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " 
pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.422785 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380e01ec-794a-4356-995c-ef1113b1b126-serving-cert\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.447201 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbp5b\" (UniqueName: \"kubernetes.io/projected/380e01ec-794a-4356-995c-ef1113b1b126-kube-api-access-sbp5b\") pod \"controller-manager-79887f45c6-xhvgp\" (UID: \"380e01ec-794a-4356-995c-ef1113b1b126\") " pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516181 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-config\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516288 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-client-ca\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516327 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2q2\" (UniqueName: \"kubernetes.io/projected/45e1613c-b869-484e-a000-da4460283966-kube-api-access-2p2q2\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.516425 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e1613c-b869-484e-a000-da4460283966-serving-cert\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.517439 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-config\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.517608 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45e1613c-b869-484e-a000-da4460283966-client-ca\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.523331 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45e1613c-b869-484e-a000-da4460283966-serving-cert\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.545307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2q2\" (UniqueName: \"kubernetes.io/projected/45e1613c-b869-484e-a000-da4460283966-kube-api-access-2p2q2\") pod \"route-controller-manager-8d744b964-dczt5\" (UID: \"45e1613c-b869-484e-a000-da4460283966\") " pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.616544 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:27 crc kubenswrapper[4763]: I0131 14:58:27.633347 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.062618 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5"] Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.104156 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79887f45c6-xhvgp"] Jan 31 14:58:28 crc kubenswrapper[4763]: W0131 14:58:28.116738 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380e01ec_794a_4356_995c_ef1113b1b126.slice/crio-d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f WatchSource:0}: Error finding container d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f: Status 404 returned error can't find the container with id d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.249480 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-wgctn"] Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.250123 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.252508 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.252684 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.252954 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253079 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253178 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253344 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253463 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253569 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253754 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.253921 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.258078 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.258539 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.261301 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-wgctn"] Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.263734 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.269103 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.273064 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.325875 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " 
pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326224 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xmf\" (UniqueName: \"kubernetes.io/projected/d05a2994-becc-48cf-baf3-a17f479924ba-kube-api-access-k7xmf\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326271 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-audit-policies\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326290 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326526 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05a2994-becc-48cf-baf3-a17f479924ba-audit-dir\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326572 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326606 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326624 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326672 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326725 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326897 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.326975 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.327008 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.327033 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428131 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-service-ca\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428208 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428227 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428253 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428280 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xmf\" (UniqueName: \"kubernetes.io/projected/d05a2994-becc-48cf-baf3-a17f479924ba-kube-api-access-k7xmf\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428301 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-audit-policies\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428318 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05a2994-becc-48cf-baf3-a17f479924ba-audit-dir\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428360 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428410 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428427 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.428446 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429250 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-audit-policies\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429305 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05a2994-becc-48cf-baf3-a17f479924ba-audit-dir\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429370 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429583 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.429931 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.438313 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.438526 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.438739 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-error\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.439080 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-user-template-login\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.439309 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-router-certs\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.439566 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.440280 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-session\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.440773 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05a2994-becc-48cf-baf3-a17f479924ba-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65ff5df46b-wgctn\" 
(UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.447997 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xmf\" (UniqueName: \"kubernetes.io/projected/d05a2994-becc-48cf-baf3-a17f479924ba-kube-api-access-k7xmf\") pod \"oauth-openshift-65ff5df46b-wgctn\" (UID: \"d05a2994-becc-48cf-baf3-a17f479924ba\") " pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.600948 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.992740 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" event={"ID":"380e01ec-794a-4356-995c-ef1113b1b126","Type":"ContainerStarted","Data":"47ae2119b32406b5a4bc9ed6f6384c6eb429759a031f29bef87cdc7554dc4ff3"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.993129 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.993149 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" event={"ID":"380e01ec-794a-4356-995c-ef1113b1b126","Type":"ContainerStarted","Data":"d08f359c5b02f449c344da21761eb32c1b0b871bc4c01ea0f73859b2fbd0742f"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.994369 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" event={"ID":"45e1613c-b869-484e-a000-da4460283966","Type":"ContainerStarted","Data":"372ab3b230cad3e5b4feed5027ebd56e62177302f1d6d673c6694d49f38b2ab1"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.994395 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" event={"ID":"45e1613c-b869-484e-a000-da4460283966","Type":"ContainerStarted","Data":"4b774e08c3554b08f112b7a609047ea5b045fe00b717ed824cfcaeffd79d2bf1"} Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.994847 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:28 crc kubenswrapper[4763]: I0131 14:58:28.998759 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.001004 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.017836 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79887f45c6-xhvgp" podStartSLOduration=4.017817075 podStartE2EDuration="4.017817075s" podCreationTimestamp="2026-01-31 14:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:29.015453428 +0000 UTC m=+228.770191751" watchObservedRunningTime="2026-01-31 14:58:29.017817075 +0000 UTC 
m=+228.772555368" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.057348 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8d744b964-dczt5" podStartSLOduration=4.057330435 podStartE2EDuration="4.057330435s" podCreationTimestamp="2026-01-31 14:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:29.054233258 +0000 UTC m=+228.808971571" watchObservedRunningTime="2026-01-31 14:58:29.057330435 +0000 UTC m=+228.812068728" Jan 31 14:58:29 crc kubenswrapper[4763]: I0131 14:58:29.058862 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65ff5df46b-wgctn"] Jan 31 14:58:30 crc kubenswrapper[4763]: I0131 14:58:30.002282 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" event={"ID":"d05a2994-becc-48cf-baf3-a17f479924ba","Type":"ContainerStarted","Data":"2209620677ee635e4efed085b81c777d11d8d97617cfed685f97d1355a8bb630"} Jan 31 14:58:30 crc kubenswrapper[4763]: I0131 14:58:30.002361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" event={"ID":"d05a2994-becc-48cf-baf3-a17f479924ba","Type":"ContainerStarted","Data":"92471a19942769780ce31578273efb7d154e1e41a1156d1e8c2cd3db8e6802a7"} Jan 31 14:58:31 crc kubenswrapper[4763]: I0131 14:58:31.037202 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" podStartSLOduration=31.037179432 podStartE2EDuration="31.037179432s" podCreationTimestamp="2026-01-31 14:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:31.036002258 +0000 UTC m=+230.790740551" watchObservedRunningTime="2026-01-31 14:58:31.037179432 +0000 UTC m=+230.791917735" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.457806 4763 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458810 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458855 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458954 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458900 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.458931 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" gracePeriod=15 Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.461719 4763 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.461941 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.461959 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.461974 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.461981 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.461993 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462000 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.462009 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462016 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.462029 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462036 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.462048 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462054 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462181 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462195 4763 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462206 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462218 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.462228 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.463610 4763 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.464564 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.470892 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.511757 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606719 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606762 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606804 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606864 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.606911 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.607005 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.707906 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.707964 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.707993 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708010 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708046 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708081 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708105 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708082 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708163 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708175 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708200 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708122 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708160 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708218 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708194 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.708038 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: I0131 14:58:34.806486 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:58:34 crc kubenswrapper[4763]: W0131 14:58:34.835735 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56 WatchSource:0}: Error finding container f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56: Status 404 returned error can't find the container with id f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56 Jan 31 14:58:34 crc kubenswrapper[4763]: E0131 14:58:34.840097 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8c3fc054bd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,LastTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.045447 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047282 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047332 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047348 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.047361 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" exitCode=2 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.053016 4763 generic.go:334] "Generic (PLEG): container finished" podID="e242425d-c262-45a8-b933-a84abec6740e" containerID="441b596e02022ae8016874e04bb2e0f024181a6d82a5065a1b6c5f7422bd492e" exitCode=0 Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.053295 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f3f0047e9cf6cf792d1c255b8b4cbb944c0ff4f1a8d24ed5857f9a3c030dac56"} Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.053361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerDied","Data":"441b596e02022ae8016874e04bb2e0f024181a6d82a5065a1b6c5f7422bd492e"} Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.054047 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:35 crc kubenswrapper[4763]: I0131 14:58:35.054654 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.061533 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8"} Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.062348 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.062763 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.517466 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.518565 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.518980 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643092 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") pod \"e242425d-c262-45a8-b933-a84abec6740e\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643194 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") pod \"e242425d-c262-45a8-b933-a84abec6740e\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643239 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") pod \"e242425d-c262-45a8-b933-a84abec6740e\" (UID: \"e242425d-c262-45a8-b933-a84abec6740e\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643240 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock" (OuterVolumeSpecName: "var-lock") pod "e242425d-c262-45a8-b933-a84abec6740e" (UID: "e242425d-c262-45a8-b933-a84abec6740e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643322 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e242425d-c262-45a8-b933-a84abec6740e" (UID: "e242425d-c262-45a8-b933-a84abec6740e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643533 4763 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.643552 4763 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e242425d-c262-45a8-b933-a84abec6740e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.650361 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e242425d-c262-45a8-b933-a84abec6740e" (UID: "e242425d-c262-45a8-b933-a84abec6740e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.744457 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e242425d-c262-45a8-b933-a84abec6740e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.836792 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.837715 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.838260 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.838924 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.839520 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946653 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946760 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946857 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946849 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946899 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.946930 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.947421 4763 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.947455 4763 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: I0131 14:58:36.947481 4763 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:36 crc kubenswrapper[4763]: E0131 14:58:36.994907 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8c3fc054bd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,LastTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.053745 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.072122 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e242425d-c262-45a8-b933-a84abec6740e","Type":"ContainerDied","Data":"dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95"} Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.072161 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcc37e590bdd58dcc1ac284fc11c47be208afeacbdd9d483c5b8907e05e7ba95" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.072193 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.077997 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.078688 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.078901 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.080305 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" exitCode=0 Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.080449 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.080469 4763 scope.go:117] "RemoveContainer" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.081784 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.082399 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.082928 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.084817 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.085735 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.087598 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.106085 4763 scope.go:117] "RemoveContainer" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.130420 4763 scope.go:117] "RemoveContainer" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.151991 4763 scope.go:117] "RemoveContainer" containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.179316 4763 scope.go:117] "RemoveContainer" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.202395 4763 scope.go:117] "RemoveContainer" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" Jan 31 14:58:37 crc 
kubenswrapper[4763]: I0131 14:58:37.229041 4763 scope.go:117] "RemoveContainer" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.229659 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\": container with ID starting with b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b not found: ID does not exist" containerID="b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.229759 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b"} err="failed to get container status \"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\": rpc error: code = NotFound desc = could not find container \"b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b\": container with ID starting with b6198d6522ab36c15e0b86f1ff23bf9951923338501cb9962c6466aad76f665b not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.229809 4763 scope.go:117] "RemoveContainer" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.230431 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\": container with ID starting with 02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12 not found: ID does not exist" containerID="02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.230496 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12"} err="failed to get container status \"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\": rpc error: code = NotFound desc = could not find container \"02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12\": container with ID starting with 02e36672c4e294e4d95b25666cb971412c2eb7840041f121bb876c36292dec12 not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.230528 4763 scope.go:117] "RemoveContainer" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.231199 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\": container with ID starting with 1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e not found: ID does not exist" containerID="1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231263 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e"} err="failed to get container status \"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\": rpc error: code = NotFound desc = could not find container 
\"1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e\": container with ID starting with 1fc6de3fdc9e218e019463b2d0258a84ebf50e13380dde3e3bbf1549a4f84c4e not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231298 4763 scope.go:117] "RemoveContainer" containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.231773 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\": container with ID starting with 5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b not found: ID does not exist" containerID="5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231832 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b"} err="failed to get container status \"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\": rpc error: code = NotFound desc = could not find container \"5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b\": container with ID starting with 5924fe4a0feb696700b39bf8f594e0caf433b543a1390fdebd65bd82c9a9d85b not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.231859 4763 scope.go:117] "RemoveContainer" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.232332 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\": container with ID starting with ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c not found: ID does not exist" containerID="ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.232391 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c"} err="failed to get container status \"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\": rpc error: code = NotFound desc = could not find container \"ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c\": container with ID starting with ece1666406fdc3118c7f1c82cb715f2115112de7c650de66314ef5b11337464c not found: ID does not exist" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.232433 4763 scope.go:117] "RemoveContainer" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" Jan 31 14:58:37 crc kubenswrapper[4763]: E0131 14:58:37.232985 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\": container with ID starting with 89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295 not found: ID does not exist" containerID="89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295" Jan 31 14:58:37 crc kubenswrapper[4763]: I0131 14:58:37.233042 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295"} 
err="failed to get container status \"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\": rpc error: code = NotFound desc = could not find container \"89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295\": container with ID starting with 89c6e3befb99054c57e9d34cc1580ca9a39cc968592d3b05c04f971de46e9295 not found: ID does not exist" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.602272 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.610887 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.611541 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.611967 4763 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.612325 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:38 crc kubenswrapper[4763]: I0131 14:58:38.612912 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.785267 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.785719 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.786089 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.786558 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 
14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.787084 4763 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:40 crc kubenswrapper[4763]: I0131 14:58:40.787121 4763 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.787452 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Jan 31 14:58:40 crc kubenswrapper[4763]: E0131 14:58:40.988283 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 31 14:58:41 crc kubenswrapper[4763]: I0131 14:58:41.044485 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:41 crc kubenswrapper[4763]: I0131 14:58:41.045315 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:41 crc kubenswrapper[4763]: I0131 14:58:41.045616 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:41 crc kubenswrapper[4763]: E0131 14:58:41.390141 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Jan 31 14:58:42 crc kubenswrapper[4763]: E0131 14:58:42.191796 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 31 14:58:43 crc kubenswrapper[4763]: E0131 14:58:43.792522 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Jan 31 14:58:46 crc kubenswrapper[4763]: E0131 14:58:46.993546 4763 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Jan 31 14:58:46 crc kubenswrapper[4763]: E0131 14:58:46.996152 4763 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd8c3fc054bd2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,LastTimestamp:2026-01-31 14:58:34.83832213 +0000 UTC m=+234.593060463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.152351 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.152442 4763 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f" exitCode=1 Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.152506 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f"} Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.153433 4763 scope.go:117] "RemoveContainer" containerID="775b1fe383bbd2b27b49c857e911a30ca306ec06b5453161df38d2c1d20e607f" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.153885 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.155751 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.156292 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:49 crc kubenswrapper[4763]: I0131 14:58:49.156646 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.041836 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.043277 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.043809 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.044302 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.044967 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.057179 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.057233 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:50 crc kubenswrapper[4763]: E0131 14:58:50.057827 4763 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.058614 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:50 crc kubenswrapper[4763]: W0131 14:58:50.077067 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f WatchSource:0}: Error finding container a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f: Status 404 returned error can't find the container with id a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.164048 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.164440 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d2a870ccff7eb78880185ce2922012957b68a7b3e4028f2b3e48d2dcd1b9f481"} Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.165866 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.166414 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.166977 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.167074 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a02cd5ef656b6f07cc9a09971eea6eda7e1d2eb0ff5fdc2939425ca67aafbd5f"} Jan 31 14:58:50 crc kubenswrapper[4763]: I0131 14:58:50.167621 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.062765 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: 
connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.063546 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.064185 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.064643 4763 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.065117 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.181766 4763 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="07f6bff26f370a15049c5174828a86d8544bf96781bc64207289748f486b7ac8" exitCode=0 Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.181842 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"07f6bff26f370a15049c5174828a86d8544bf96781bc64207289748f486b7ac8"} Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.182486 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.182543 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.182820 4763 status_manager.go:851] "Failed to get status for pod" podUID="d05a2994-becc-48cf-baf3-a17f479924ba" pod="openshift-authentication/oauth-openshift-65ff5df46b-wgctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-65ff5df46b-wgctn\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.183284 4763 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: E0131 14:58:51.183330 4763 mirror_client.go:138] "Failed 
deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.183781 4763 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.184259 4763 status_manager.go:851] "Failed to get status for pod" podUID="e242425d-c262-45a8-b933-a84abec6740e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:51 crc kubenswrapper[4763]: I0131 14:58:51.184930 4763 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 31 14:58:52 crc kubenswrapper[4763]: I0131 14:58:52.192736 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"87dcd09f64d885c5b934a05e71ca5aa27fa58062ab5a54196206479f30add3e9"} Jan 31 14:58:52 crc kubenswrapper[4763]: I0131 14:58:52.193319 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8e13de8c254a91a8b6d3a77c97a8e0bf523677948ccfb4eaa7d89f4c22510004"} Jan 31 14:58:52 crc kubenswrapper[4763]: I0131 14:58:52.193333 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e568897626546bb8112884f3788ed3d71e6541529265ad71b5c66893c2db116d"} Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199486 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11513eb82f391a20aebebd87396db7125c8521123ab258751f2b8d497f4e167f"} Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199526 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8a2a810430ac3145318caf57bfafd2acfe3ceb2117e75b0c71af39aa051b8df8"} Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199650 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199750 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:53 crc kubenswrapper[4763]: I0131 14:58:53.199776 4763 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.059354 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.059863 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.067888 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.394980 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.737795 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:55 crc kubenswrapper[4763]: I0131 14:58:55.742104 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:58:58 crc kubenswrapper[4763]: I0131 14:58:58.216281 4763 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:58:58 crc kubenswrapper[4763]: I0131 14:58:58.348578 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd3e29f-bf35-46d2-9605-e8963646b845" Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.236801 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.236834 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.244168 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd3e29f-bf35-46d2-9605-e8963646b845" Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.244764 4763 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://e568897626546bb8112884f3788ed3d71e6541529265ad71b5c66893c2db116d" Jan 31 14:58:59 crc kubenswrapper[4763]: I0131 14:58:59.244788 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:59:00 crc kubenswrapper[4763]: I0131 14:59:00.244404 4763 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:59:00 crc kubenswrapper[4763]: I0131 14:59:00.244448 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="72708d98-90a0-4456-bdbb-6ccdf80bd45f" Jan 31 14:59:00 crc kubenswrapper[4763]: I0131 14:59:00.249781 4763 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ebd3e29f-bf35-46d2-9605-e8963646b845" Jan 31 14:59:05 crc kubenswrapper[4763]: I0131 14:59:05.400897 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.130229 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.260131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.487943 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.622459 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 14:59:08 crc kubenswrapper[4763]: I0131 14:59:08.650580 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.126020 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.211075 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.231330 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.346590 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.353581 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.467012 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.474792 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.545511 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.705874 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.888067 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.975067 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:59:09 crc kubenswrapper[4763]: I0131 14:59:09.976848 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.021010 
4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.170527 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.315788 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.378165 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.481289 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.547552 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.597385 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.878078 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.977131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 14:59:10 crc kubenswrapper[4763]: I0131 14:59:10.991791 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.057324 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.083626 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.200241 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.275038 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.397945 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.402315 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.407108 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.464837 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.547013 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.568879 4763 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.592411 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.635011 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.692243 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.867260 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.880146 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 14:59:11 crc kubenswrapper[4763]: I0131 14:59:11.998939 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.032531 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.098229 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.101512 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.105033 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.135048 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.229533 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.286741 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.329973 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.439462 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.486517 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.505870 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.516486 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:59:12 crc 
kubenswrapper[4763]: I0131 14:59:12.609104 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.686979 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.698566 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.737393 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.864790 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.896434 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 14:59:12 crc kubenswrapper[4763]: I0131 14:59:12.984011 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.164447 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.197390 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.202546 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.202860 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.222331 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.403895 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.520164 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.544092 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.558280 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.569687 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.631830 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.667409 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 14:59:13 crc 
kubenswrapper[4763]: I0131 14:59:13.719728 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.746131 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.758725 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.862996 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.871017 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:59:13 crc kubenswrapper[4763]: I0131 14:59:13.902955 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.510306 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.510326 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.510550 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.517894 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.518217 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.518368 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.518788 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.519017 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.519817 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.523818 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.524184 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.524441 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.524860 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 14:59:14 crc 
kubenswrapper[4763]: I0131 14:59:14.525028 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.525155 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.537670 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.554085 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.628605 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.762216 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.776684 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.804141 4763 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.844686 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.933559 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:59:14 crc kubenswrapper[4763]: I0131 14:59:14.936602 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.083140 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.178808 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.355397 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.521725 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.578914 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.644166 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.739065 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.768677 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.932645 4763 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 14:59:15 crc kubenswrapper[4763]: I0131 14:59:15.963972 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.035751 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.068793 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.068826 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.070045 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.446498 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.487910 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.496426 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.534501 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.549935 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.566020 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.600280 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.602066 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.611343 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.807969 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.819812 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.872832 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:59:16 crc kubenswrapper[4763]: I0131 14:59:16.993224 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.059483 4763 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.124381 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.136900 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.169807 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.196520 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.219966 4763 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.224377 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.224357025 podStartE2EDuration="43.224357025s" podCreationTimestamp="2026-01-31 14:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:58.335234418 +0000 UTC m=+258.089972711" watchObservedRunningTime="2026-01-31 14:59:17.224357025 +0000 UTC m=+276.979095318" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.224790 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.224830 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.228306 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.263646 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.263624403 podStartE2EDuration="19.263624403s" podCreationTimestamp="2026-01-31 14:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:59:17.242332343 +0000 UTC m=+276.997070676" watchObservedRunningTime="2026-01-31 14:59:17.263624403 +0000 UTC m=+277.018362696" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.309477 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.360633 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.491668 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.572466 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.627009 4763 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.709622 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.745750 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.753239 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.798542 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 14:59:17 crc kubenswrapper[4763]: I0131 14:59:17.898741 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.034157 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.160035 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.165234 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.241462 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.257483 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.260814 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.264644 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.272212 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.304641 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.340847 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.465605 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.570501 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.615753 4763 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.644194 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.648126 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.730287 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.730386 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.739177 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.742528 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.819541 4763 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.851157 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.931388 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.979808 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 14:59:18 crc kubenswrapper[4763]: I0131 14:59:18.981965 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.136468 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.167479 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.172894 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.251423 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.273219 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.357869 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.427655 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.470940 4763 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.471607 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.557555 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.595143 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.709514 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.710168 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.795760 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.872690 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:59:19 crc kubenswrapper[4763]: I0131 14:59:19.996890 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.064983 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.106922 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.184138 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.379340 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.430179 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.445645 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.490047 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.528208 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.597098 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.617540 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:59:20 crc kubenswrapper[4763]: 
I0131 14:59:20.649868 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.804850 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.823471 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.834476 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.836246 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.839758 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.853110 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.858318 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.889539 4763 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.889839 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8" gracePeriod=5 Jan 31 14:59:20 crc kubenswrapper[4763]: I0131 14:59:20.904161 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.067345 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.080494 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.185505 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.342484 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.367346 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.477068 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.538136 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 
14:59:21.549933 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.579620 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.631944 4763 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.745426 4763 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 14:59:21 crc kubenswrapper[4763]: I0131 14:59:21.989321 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.001085 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.118657 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.325916 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.552837 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.583400 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 14:59:22 crc kubenswrapper[4763]: I0131 14:59:22.616914 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.124142 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.283268 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.342429 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.349790 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.488178 4763 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.710566 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 14:59:23 crc kubenswrapper[4763]: I0131 14:59:23.822643 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 14:59:24 crc kubenswrapper[4763]: I0131 14:59:24.034839 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.406859 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.406941 4763 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8" exitCode=137 Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.468348 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.468764 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641348 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641439 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641431 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641534 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641599 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641630 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641654 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641745 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641840 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.641996 4763 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.642011 4763 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.642023 4763 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.642033 4763 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.652423 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:59:26 crc kubenswrapper[4763]: I0131 14:59:26.742515 4763 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.063561 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.064043 4763 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.075552 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.075616 4763 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0a0f5c67-c726-4345-9630-e2b665ce511c" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.079614 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.079922 4763 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="0a0f5c67-c726-4345-9630-e2b665ce511c" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.415068 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.415225 4763 scope.go:117] "RemoveContainer" containerID="7c5454adb66dbfe52e542cda6188d149bbe0998a17793a877cd0c2d065f9ddf8" Jan 31 14:59:27 crc kubenswrapper[4763]: I0131 14:59:27.415465 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:59:34 crc kubenswrapper[4763]: I0131 14:59:34.402061 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.348315 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.489156 4763 generic.go:334] "Generic (PLEG): container finished" podID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" exitCode=0 Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.489199 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerDied","Data":"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"} Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.489862 4763 scope.go:117] "RemoveContainer" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" Jan 31 14:59:39 crc kubenswrapper[4763]: I0131 14:59:39.782218 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.498588 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerStarted","Data":"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"} Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.499119 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.503070 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" Jan 31 14:59:40 crc kubenswrapper[4763]: I0131 14:59:40.829831 4763 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 31 14:59:41 crc kubenswrapper[4763]: I0131 14:59:41.473466 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 14:59:42 crc kubenswrapper[4763]: I0131 14:59:42.561019 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 14:59:43 crc kubenswrapper[4763]: I0131 14:59:43.128180 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 14:59:43 crc kubenswrapper[4763]: I0131 14:59:43.798676 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:59:43 crc kubenswrapper[4763]: I0131 14:59:43.973845 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 14:59:45 crc kubenswrapper[4763]: I0131 14:59:45.850367 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 14:59:46 crc kubenswrapper[4763]: I0131 14:59:46.574164 4763 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 14:59:46 crc kubenswrapper[4763]: I0131 14:59:46.929689 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 14:59:48 crc kubenswrapper[4763]: I0131 14:59:48.539474 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 14:59:49 crc kubenswrapper[4763]: I0131 14:59:49.762199 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:59:52 crc kubenswrapper[4763]: I0131 14:59:52.543594 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 14:59:53 crc kubenswrapper[4763]: I0131 14:59:53.291424 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 14:59:53 crc kubenswrapper[4763]: I0131 14:59:53.529046 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:59:53 crc kubenswrapper[4763]: I0131 14:59:53.646796 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 14:59:56 crc kubenswrapper[4763]: I0131 14:59:56.943279 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.190661 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"] Jan 31 15:00:00 crc kubenswrapper[4763]: E0131 15:00:00.191284 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e242425d-c262-45a8-b933-a84abec6740e" containerName="installer" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191302 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e242425d-c262-45a8-b933-a84abec6740e" containerName="installer" Jan 31 15:00:00 crc kubenswrapper[4763]: E0131 15:00:00.191325 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191335 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191477 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e242425d-c262-45a8-b933-a84abec6740e" containerName="installer" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.191502 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.192018 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.194487 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.196259 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.201331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.201412 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.201433 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.204045 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"] Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.302269 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.302367 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.302919 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.304202 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod 
\"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.314455 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.339327 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"collect-profiles-29497860-lhblx\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:00 crc kubenswrapper[4763]: I0131 15:00:00.513384 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:02 crc kubenswrapper[4763]: I0131 15:00:02.065078 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 15:00:03 crc kubenswrapper[4763]: I0131 15:00:03.104399 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710062 4763 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 31 15:00:03 crc kubenswrapper[4763]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede" Netns:"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29497860-lhblx in out of cluster comm: pod "collect-profiles-29497860-lhblx" not found Jan 31 15:00:03 crc kubenswrapper[4763]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 15:00:03 crc 
kubenswrapper[4763]: > Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710534 4763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 31 15:00:03 crc kubenswrapper[4763]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede" Netns:"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29497860-lhblx in out of cluster comm: pod "collect-profiles-29497860-lhblx" not found Jan 31 15:00:03 crc kubenswrapper[4763]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 15:00:03 crc kubenswrapper[4763]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710555 4763 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 31 15:00:03 crc kubenswrapper[4763]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede" Netns:"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod 
collect-profiles-29497860-lhblx in out of cluster comm: pod "collect-profiles-29497860-lhblx" not found Jan 31 15:00:03 crc kubenswrapper[4763]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 31 15:00:03 crc kubenswrapper[4763]: > pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:03 crc kubenswrapper[4763]: E0131 15:00:03.710615 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager(77150388-1064-46e6-9636-ebfa1eacf88f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager(77150388-1064-46e6-9636-ebfa1eacf88f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29497860-lhblx_openshift-operator-lifecycle-manager_77150388-1064-46e6-9636-ebfa1eacf88f_0(8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede): error adding pod openshift-operator-lifecycle-manager_collect-profiles-29497860-lhblx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede\\\" Netns:\\\"/var/run/netns/82607e9f-932d-4c94-8fbc-d492a7b54d60\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-29497860-lhblx;K8S_POD_INFRA_CONTAINER_ID=8deeefd78b24379a86142eb29db8225dfd067b98a0bd2ac5d876e5b923c12ede;K8S_POD_UID=77150388-1064-46e6-9636-ebfa1eacf88f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx/77150388-1064-46e6-9636-ebfa1eacf88f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-29497860-lhblx in out of cluster comm: pod \\\"collect-profiles-29497860-lhblx\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" Jan 31 15:00:04 crc kubenswrapper[4763]: I0131 15:00:04.636219 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:04 crc kubenswrapper[4763]: I0131 15:00:04.636887 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.252535 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx"] Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.655498 4763 generic.go:334] "Generic (PLEG): container finished" podID="77150388-1064-46e6-9636-ebfa1eacf88f" containerID="32d38ac7a45f00c53daa0635462129846e750f14e15f4abb1ec6484cb9065078" exitCode=0 Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.655607 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" event={"ID":"77150388-1064-46e6-9636-ebfa1eacf88f","Type":"ContainerDied","Data":"32d38ac7a45f00c53daa0635462129846e750f14e15f4abb1ec6484cb9065078"} Jan 31 15:00:07 crc kubenswrapper[4763]: I0131 15:00:07.655840 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" event={"ID":"77150388-1064-46e6-9636-ebfa1eacf88f","Type":"ContainerStarted","Data":"0aa8f4ac38d1afcc87fa2d3639b470eb20c3d54da8f24f5a5f3dca69a36f6398"} Jan 31 15:00:08 crc kubenswrapper[4763]: I0131 15:00:08.976150 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.117248 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") pod \"77150388-1064-46e6-9636-ebfa1eacf88f\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.117291 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") pod \"77150388-1064-46e6-9636-ebfa1eacf88f\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.117336 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") pod \"77150388-1064-46e6-9636-ebfa1eacf88f\" (UID: \"77150388-1064-46e6-9636-ebfa1eacf88f\") " Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.118197 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume" (OuterVolumeSpecName: "config-volume") pod "77150388-1064-46e6-9636-ebfa1eacf88f" (UID: "77150388-1064-46e6-9636-ebfa1eacf88f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.123748 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77150388-1064-46e6-9636-ebfa1eacf88f" (UID: "77150388-1064-46e6-9636-ebfa1eacf88f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.124896 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9" (OuterVolumeSpecName: "kube-api-access-h8jm9") pod "77150388-1064-46e6-9636-ebfa1eacf88f" (UID: "77150388-1064-46e6-9636-ebfa1eacf88f"). InnerVolumeSpecName "kube-api-access-h8jm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.218834 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77150388-1064-46e6-9636-ebfa1eacf88f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.218873 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77150388-1064-46e6-9636-ebfa1eacf88f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.218887 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8jm9\" (UniqueName: \"kubernetes.io/projected/77150388-1064-46e6-9636-ebfa1eacf88f-kube-api-access-h8jm9\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.668389 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" event={"ID":"77150388-1064-46e6-9636-ebfa1eacf88f","Type":"ContainerDied","Data":"0aa8f4ac38d1afcc87fa2d3639b470eb20c3d54da8f24f5a5f3dca69a36f6398"} Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.668680 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa8f4ac38d1afcc87fa2d3639b470eb20c3d54da8f24f5a5f3dca69a36f6398" Jan 31 15:00:09 crc kubenswrapper[4763]: I0131 15:00:09.668457 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-lhblx" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.758687 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bf77"] Jan 31 15:00:41 crc kubenswrapper[4763]: E0131 15:00:41.759251 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" containerName="collect-profiles" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.759263 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" containerName="collect-profiles" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.759365 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="77150388-1064-46e6-9636-ebfa1eacf88f" containerName="collect-profiles" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.759742 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.774211 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bf77"] Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866290 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b3ec224-3471-48bb-a15e-e4a6d5635279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866343 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-bound-sa-token\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866378 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-trusted-ca\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866403 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b3ec224-3471-48bb-a15e-e4a6d5635279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866513 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866590 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-tls\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866625 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-certificates\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.866675 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2j8n\" (UniqueName: 
\"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-kube-api-access-z2j8n\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.892630 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968246 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b3ec224-3471-48bb-a15e-e4a6d5635279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968320 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-bound-sa-token\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968363 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-trusted-ca\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968387 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b3ec224-3471-48bb-a15e-e4a6d5635279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968435 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-tls\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968457 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-certificates\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968484 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2j8n\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-kube-api-access-z2j8n\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.968978 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9b3ec224-3471-48bb-a15e-e4a6d5635279-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.970202 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-trusted-ca\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.970227 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-certificates\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.975024 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-registry-tls\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.976276 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9b3ec224-3471-48bb-a15e-e4a6d5635279-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.987178 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-bound-sa-token\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:41 crc kubenswrapper[4763]: I0131 15:00:41.993227 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2j8n\" (UniqueName: \"kubernetes.io/projected/9b3ec224-3471-48bb-a15e-e4a6d5635279-kube-api-access-z2j8n\") pod \"image-registry-66df7c8f76-8bf77\" (UID: \"9b3ec224-3471-48bb-a15e-e4a6d5635279\") " pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.081018 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.543929 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8bf77"] Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.861659 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" event={"ID":"9b3ec224-3471-48bb-a15e-e4a6d5635279","Type":"ContainerStarted","Data":"b7413ced75d10a5e383b9f6e94e8568213cb32cdf9bf5ce5a3ef72224d47b796"} Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.862022 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" event={"ID":"9b3ec224-3471-48bb-a15e-e4a6d5635279","Type":"ContainerStarted","Data":"0f9ffcfe233eade7219ea2e3b8ef877b660d7505c1f1bdcf35ad12638cd4e3e4"} Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.862055 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:00:42 crc kubenswrapper[4763]: I0131 15:00:42.882920 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" podStartSLOduration=1.882896836 podStartE2EDuration="1.882896836s" podCreationTimestamp="2026-01-31 15:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:42.881483107 +0000 UTC m=+362.636221400" watchObservedRunningTime="2026-01-31 15:00:42.882896836 +0000 UTC m=+362.637635179" Jan 31 15:00:44 crc kubenswrapper[4763]: I0131 15:00:44.177399 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:00:44 crc kubenswrapper[4763]: I0131 15:00:44.177729 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.842327 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.843090 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k6ddv" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" containerID="cri-o://b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" gracePeriod=30 Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.860194 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9df4p"] Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.860475 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9df4p" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" containerID="cri-o://34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" gracePeriod=30 
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.877483 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.877954 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" containerID="cri-o://94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.885597 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.886145 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wnmmq" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" containerID="cri-o://0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.901222 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.901656 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wxzg8" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" containerID="cri-o://6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" gracePeriod=30
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.916754 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gg2dq"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.918107 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.928576 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gg2dq"]
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.987087 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.987484 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:49 crc kubenswrapper[4763]: I0131 15:00:49.987522 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6df\" (UniqueName: \"kubernetes.io/projected/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-kube-api-access-vp6df\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.090525 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.090624 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6df\" (UniqueName: \"kubernetes.io/projected/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-kube-api-access-vp6df\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.090832 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.091928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.099087 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.108339 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6df\" (UniqueName: \"kubernetes.io/projected/38baa8fd-7b8e-4c7b-ac03-d739f10d242a-kube-api-access-vp6df\") pod \"marketplace-operator-79b997595-gg2dq\" (UID: \"38baa8fd-7b8e-4c7b-ac03-d739f10d242a\") " pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.313569 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.332195 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.348374 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.349094 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.349445 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.355286 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.400363 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") pod \"b8a35a73-67a0-4bb4-9954-46350d31b017\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.400423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") pod \"b8a35a73-67a0-4bb4-9954-46350d31b017\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.400486 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") pod \"b8a35a73-67a0-4bb4-9954-46350d31b017\" (UID: \"b8a35a73-67a0-4bb4-9954-46350d31b017\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.402317 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities" (OuterVolumeSpecName: "utilities") pod "b8a35a73-67a0-4bb4-9954-46350d31b017" (UID: "b8a35a73-67a0-4bb4-9954-46350d31b017"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.404400 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln" (OuterVolumeSpecName: "kube-api-access-vk9ln") pod "b8a35a73-67a0-4bb4-9954-46350d31b017" (UID: "b8a35a73-67a0-4bb4-9954-46350d31b017"). InnerVolumeSpecName "kube-api-access-vk9ln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.488619 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8a35a73-67a0-4bb4-9954-46350d31b017" (UID: "b8a35a73-67a0-4bb4-9954-46350d31b017"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.504903 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") pod \"5f3cc890-2041-4983-8501-088c40c22b77\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") pod \"2434f0b9-846a-444c-b487-745d4010002b\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505074 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") pod \"5f3cc890-2041-4983-8501-088c40c22b77\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505106 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") pod \"2434f0b9-846a-444c-b487-745d4010002b\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505171 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") pod \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505197 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") pod \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") pod \"2434f0b9-846a-444c-b487-745d4010002b\" (UID: \"2434f0b9-846a-444c-b487-745d4010002b\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505322 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") pod \"5c097873-7ca4-491d-86c4-31b2ab99d63d\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") pod \"5f3cc890-2041-4983-8501-088c40c22b77\" (UID: \"5f3cc890-2041-4983-8501-088c40c22b77\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505427 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") pod \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\" (UID: \"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505547 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") pod \"5c097873-7ca4-491d-86c4-31b2ab99d63d\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.505600 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") pod \"5c097873-7ca4-491d-86c4-31b2ab99d63d\" (UID: \"5c097873-7ca4-491d-86c4-31b2ab99d63d\") "
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506002 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk9ln\" (UniqueName: \"kubernetes.io/projected/b8a35a73-67a0-4bb4-9954-46350d31b017-kube-api-access-vk9ln\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506022 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506034 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8a35a73-67a0-4bb4-9954-46350d31b017-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506125 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities" (OuterVolumeSpecName: "utilities") pod "5f3cc890-2041-4983-8501-088c40c22b77" (UID: "5f3cc890-2041-4983-8501-088c40c22b77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.506271 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities" (OuterVolumeSpecName: "utilities") pod "2434f0b9-846a-444c-b487-745d4010002b" (UID: "2434f0b9-846a-444c-b487-745d4010002b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.508305 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" (UID: "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.510169 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities" (OuterVolumeSpecName: "utilities") pod "5c097873-7ca4-491d-86c4-31b2ab99d63d" (UID: "5c097873-7ca4-491d-86c4-31b2ab99d63d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.511530 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh" (OuterVolumeSpecName: "kube-api-access-7bxzh") pod "5f3cc890-2041-4983-8501-088c40c22b77" (UID: "5f3cc890-2041-4983-8501-088c40c22b77"). InnerVolumeSpecName "kube-api-access-7bxzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.514478 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2" (OuterVolumeSpecName: "kube-api-access-rrvp2") pod "5c097873-7ca4-491d-86c4-31b2ab99d63d" (UID: "5c097873-7ca4-491d-86c4-31b2ab99d63d"). InnerVolumeSpecName "kube-api-access-rrvp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.516209 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" (UID: "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.521815 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx" (OuterVolumeSpecName: "kube-api-access-sfdsx") pod "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" (UID: "c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc"). InnerVolumeSpecName "kube-api-access-sfdsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.523917 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25" (OuterVolumeSpecName: "kube-api-access-tzr25") pod "2434f0b9-846a-444c-b487-745d4010002b" (UID: "2434f0b9-846a-444c-b487-745d4010002b"). InnerVolumeSpecName "kube-api-access-tzr25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.543595 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2434f0b9-846a-444c-b487-745d4010002b" (UID: "2434f0b9-846a-444c-b487-745d4010002b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.583508 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c097873-7ca4-491d-86c4-31b2ab99d63d" (UID: "5c097873-7ca4-491d-86c4-31b2ab99d63d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608240 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608319 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608352 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608382 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bxzh\" (UniqueName: \"kubernetes.io/projected/5f3cc890-2041-4983-8501-088c40c22b77-kube-api-access-7bxzh\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608405 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2434f0b9-846a-444c-b487-745d4010002b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608435 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdsx\" (UniqueName: \"kubernetes.io/projected/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-kube-api-access-sfdsx\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608450 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608466 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzr25\" (UniqueName: \"kubernetes.io/projected/2434f0b9-846a-444c-b487-745d4010002b-kube-api-access-tzr25\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608483 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrvp2\" (UniqueName: \"kubernetes.io/projected/5c097873-7ca4-491d-86c4-31b2ab99d63d-kube-api-access-rrvp2\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608514 4763 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.608527 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c097873-7ca4-491d-86c4-31b2ab99d63d-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.638006 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f3cc890-2041-4983-8501-088c40c22b77" (UID: "5f3cc890-2041-4983-8501-088c40c22b77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.709472 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3cc890-2041-4983-8501-088c40c22b77-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.762674 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gg2dq"]
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925356 4763 generic.go:334] "Generic (PLEG): container finished" podID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925463 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerDied","Data":"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925504 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf" event={"ID":"c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc","Type":"ContainerDied","Data":"ed7d3da6199e8bb4c55e177b1afca8ac78c017a1ea997eff233008f48616b7c8"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925527 4763 scope.go:117] "RemoveContainer" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.925714 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flcgf"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933292 4763 generic.go:334] "Generic (PLEG): container finished" podID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933342 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9df4p"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.933404 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9df4p" event={"ID":"5c097873-7ca4-491d-86c4-31b2ab99d63d","Type":"ContainerDied","Data":"1eaa5c467faffeb2ba7ad8dc241225ca0c8240c2cf3a8e19cde7c5ee1bfecc47"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937366 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f3cc890-2041-4983-8501-088c40c22b77" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937464 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937504 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wxzg8" event={"ID":"5f3cc890-2041-4983-8501-088c40c22b77","Type":"ContainerDied","Data":"2abcdafc7fd1d0b73e0182854de8d66e2f3700062dfd88e6ccab246a85b4c70b"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.937622 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wxzg8"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.947245 4763 scope.go:117] "RemoveContainer" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.951899 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" event={"ID":"38baa8fd-7b8e-4c7b-ac03-d739f10d242a","Type":"ContainerStarted","Data":"31865f0c5dac807dfa0bc8cda96625a8500065ea786aab87d44f2a49709e828d"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.951941 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" event={"ID":"38baa8fd-7b8e-4c7b-ac03-d739f10d242a","Type":"ContainerStarted","Data":"4449ccdd6773450d3c9ebbd3372c0b744e31f6b2aefbcef2cee13410cfd7c936"}
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.967496 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9df4p"]
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.971380 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9df4p"]
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973399 4763 generic.go:334] "Generic (PLEG): container finished" podID="2434f0b9-846a-444c-b487-745d4010002b" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" exitCode=0
Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973459 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq"
event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973484 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wnmmq" event={"ID":"2434f0b9-846a-444c-b487-745d4010002b","Type":"ContainerDied","Data":"3d06c6fc284feb92aefe314de6491da4d8ed4752eabe5f4dc5c6709aa6023802"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.973557 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wnmmq" Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984402 4763 generic.go:334] "Generic (PLEG): container finished" podID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" exitCode=0 Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984458 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984486 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k6ddv" event={"ID":"b8a35a73-67a0-4bb4-9954-46350d31b017","Type":"ContainerDied","Data":"6f670464716ecf8ab5d99a2382a3bcaf7162a13bd03fa816cb2c7b4734ade299"} Jan 31 15:00:50 crc kubenswrapper[4763]: I0131 15:00:50.984572 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k6ddv" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.000149 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" podStartSLOduration=2.000126433 podStartE2EDuration="2.000126433s" podCreationTimestamp="2026-01-31 15:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:50.986976229 +0000 UTC m=+370.741714522" watchObservedRunningTime="2026-01-31 15:00:51.000126433 +0000 UTC m=+370.754864726" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.009994 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.024677 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flcgf"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.028443 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.034763 4763 scope.go:117] "RemoveContainer" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.035010 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wxzg8"] Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.037167 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445\": container with ID starting with 
94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445 not found: ID does not exist" containerID="94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037205 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445"} err="failed to get container status \"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445\": rpc error: code = NotFound desc = could not find container \"94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445\": container with ID starting with 94c37c96940e6c7718ef7457db1b93bffde516c75beb21cabacd703afe7df445 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037232 4763 scope.go:117] "RemoveContainer" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.037571 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca\": container with ID starting with f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca not found: ID does not exist" containerID="f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037604 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca"} err="failed to get container status \"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca\": rpc error: code = NotFound desc = could not find container \"f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca\": container with ID starting with f08a6750497f2109f855a8140586391b641ccc75c075f25e6e29d0621151a5ca not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.037634 4763 scope.go:117] "RemoveContainer" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.054217 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" path="/var/lib/kubelet/pods/5c097873-7ca4-491d-86c4-31b2ab99d63d/volumes" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.055241 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3cc890-2041-4983-8501-088c40c22b77" path="/var/lib/kubelet/pods/5f3cc890-2041-4983-8501-088c40c22b77/volumes" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.056891 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" path="/var/lib/kubelet/pods/c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc/volumes" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.060984 4763 scope.go:117] "RemoveContainer" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.068406 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.080874 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k6ddv"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.082077 4763 scope.go:117] 
"RemoveContainer" containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.085980 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.089730 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wnmmq"] Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.097168 4763 scope.go:117] "RemoveContainer" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.097687 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5\": container with ID starting with 34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5 not found: ID does not exist" containerID="34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.097739 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5"} err="failed to get container status \"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5\": rpc error: code = NotFound desc = could not find container \"34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5\": container with ID starting with 34c7f9fccb1c26a38501bfbea4a61629615e747ea42bb481450c6e5c4a088aa5 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.097768 4763 scope.go:117] "RemoveContainer" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.098115 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b\": container with ID starting with 8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b not found: ID does not exist" containerID="8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098194 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b"} err="failed to get container status \"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b\": rpc error: code = NotFound desc = could not find container \"8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b\": container with ID starting with 8ba1d4757c04aeb31be5c80fe8db01bdf4d7160223329e1b7ec3fde4e61ae64b not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098236 4763 scope.go:117] "RemoveContainer" containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.098571 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc\": container with ID starting with 8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc not found: ID does not exist" 
containerID="8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098602 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc"} err="failed to get container status \"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc\": rpc error: code = NotFound desc = could not find container \"8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc\": container with ID starting with 8e4ac171d8102521507f2ca735b9c180604f975fcecb70e2eebf9141808b96cc not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.098617 4763 scope.go:117] "RemoveContainer" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.113997 4763 scope.go:117] "RemoveContainer" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.139196 4763 scope.go:117] "RemoveContainer" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.156533 4763 scope.go:117] "RemoveContainer" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.157023 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1\": container with ID starting with 6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1 not found: ID does not exist" containerID="6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157085 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1"} err="failed to get container status \"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1\": rpc error: code = NotFound desc = could not find container \"6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1\": container with ID starting with 6979898503c3db1e52b9a9085e8ed5e42c42873bedab7efaef4db47c207a2bd1 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157127 4763 scope.go:117] "RemoveContainer" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.157728 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807\": container with ID starting with 92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807 not found: ID does not exist" containerID="92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157786 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807"} err="failed to get container status \"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807\": rpc error: code = NotFound desc = could not find container 
\"92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807\": container with ID starting with 92f6b12aae342762d6c8699d9c674ad8822a3f42d0a32e49922d1a4c6fd48807 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.157816 4763 scope.go:117] "RemoveContainer" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.158128 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18\": container with ID starting with 427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18 not found: ID does not exist" containerID="427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.158149 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18"} err="failed to get container status \"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18\": rpc error: code = NotFound desc = could not find container \"427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18\": container with ID starting with 427edc0ea2301ea88efdfc1bb087b1cc85c3e9bc24e47c82fe06ab3272a59f18 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.158162 4763 scope.go:117] "RemoveContainer" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.175593 4763 scope.go:117] "RemoveContainer" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.190458 4763 scope.go:117] "RemoveContainer" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.205628 4763 scope.go:117] "RemoveContainer" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.206527 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97\": container with ID starting with 0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97 not found: ID does not exist" containerID="0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.206566 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97"} err="failed to get container status \"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97\": rpc error: code = NotFound desc = could not find container \"0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97\": container with ID starting with 0711770c6696d1677cd00d3fae5f53e4c8b8978c7d7d63abe1ed9abf767c4b97 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.206605 4763 scope.go:117] "RemoveContainer" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.206982 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1\": container with ID starting with f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1 not found: ID does not exist" containerID="f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207056 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1"} err="failed to get container status \"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1\": rpc error: code = NotFound desc = could not find container \"f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1\": container with ID starting with f4f94150c6f32dbe928669679c8121afb9a283014f70e46bdad6544c4f2849b1 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207102 4763 scope.go:117] "RemoveContainer" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.207470 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579\": container with ID starting with 05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579 not found: ID does not exist" containerID="05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207505 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579"} err="failed to get container status \"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579\": rpc error: code = NotFound desc = could not find container \"05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579\": container with ID starting with 05d8ffb53e31709980a6ae08442bf22077516244ce4226b0669c91b854ed1579 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.207531 4763 scope.go:117] "RemoveContainer" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.239846 4763 scope.go:117] "RemoveContainer" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.263160 4763 scope.go:117] "RemoveContainer" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.291771 4763 scope.go:117] "RemoveContainer" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.292091 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed\": container with ID starting with b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed not found: ID does not exist" containerID="b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292145 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed"} err="failed 
to get container status \"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed\": rpc error: code = NotFound desc = could not find container \"b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed\": container with ID starting with b11296a17e50e5a5d66eaf92faf74300be0a5405e0878ffb5ba19adff4be04ed not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292183 4763 scope.go:117] "RemoveContainer" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.292533 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239\": container with ID starting with b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239 not found: ID does not exist" containerID="b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292594 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239"} err="failed to get container status \"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239\": rpc error: code = NotFound desc = could not find container \"b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239\": container with ID starting with b8d4bf7575d1a0a38daa94164fa783e10d0f4c3b29cd1769b324fe728df81239 not found: ID does not exist" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292634 4763 scope.go:117] "RemoveContainer" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" Jan 31 15:00:51 crc kubenswrapper[4763]: E0131 15:00:51.292909 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d\": container with ID starting with f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d not found: ID does not exist" containerID="f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d" Jan 31 15:00:51 crc kubenswrapper[4763]: I0131 15:00:51.292940 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d"} err="failed to get container status \"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d\": rpc error: code = NotFound desc = could not find container \"f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d\": container with ID starting with f819b8c865363bd5ff66e2069c6d5174ad8675d68262ee17b08a94b17937547d not found: ID does not exist" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.016208 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.020402 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gg2dq" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.068721 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znznv"] Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.068983 4763 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.068999 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069018 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069025 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069036 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069044 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069056 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069064 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069077 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069084 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069095 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069102 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069113 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069121 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069130 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069138 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069145 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069153 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069163 4763 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069170 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069180 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069187 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="extract-content" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069199 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069206 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="extract-utilities" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069217 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069224 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: E0131 15:00:52.069233 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069241 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069363 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="2434f0b9-846a-444c-b487-745d4010002b" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069455 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3cc890-2041-4983-8501-088c40c22b77" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069480 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069493 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10eb6e4-b5d9-4be9-8f40-99a9bb0f48fc" containerName="marketplace-operator" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069508 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c097873-7ca4-491d-86c4-31b2ab99d63d" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.069520 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" containerName="registry-server" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.070389 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.074284 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.086127 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znznv"] Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.231046 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hzt\" (UniqueName: \"kubernetes.io/projected/d877abcd-9d8f-4597-b41c-4026d954cc62-kube-api-access-48hzt\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.231186 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-utilities\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.231242 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-catalog-content\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.259798 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gshr8"] Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.261165 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.263817 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.292343 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gshr8"] Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.332520 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-utilities\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.332915 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48hzt\" (UniqueName: \"kubernetes.io/projected/d877abcd-9d8f-4597-b41c-4026d954cc62-kube-api-access-48hzt\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333038 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-catalog-content\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333136 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-utilities\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333238 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtnf\" (UniqueName: \"kubernetes.io/projected/751420d5-1809-406a-bef8-8e4015d9763b-kube-api-access-qvtnf\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333347 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-catalog-content\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333621 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-utilities\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.333657 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d877abcd-9d8f-4597-b41c-4026d954cc62-catalog-content\") pod \"redhat-marketplace-znznv\" (UID: 
\"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.358375 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48hzt\" (UniqueName: \"kubernetes.io/projected/d877abcd-9d8f-4597-b41c-4026d954cc62-kube-api-access-48hzt\") pod \"redhat-marketplace-znznv\" (UID: \"d877abcd-9d8f-4597-b41c-4026d954cc62\") " pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.416527 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.434343 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvtnf\" (UniqueName: \"kubernetes.io/projected/751420d5-1809-406a-bef8-8e4015d9763b-kube-api-access-qvtnf\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.434515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-utilities\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.434634 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-catalog-content\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.435099 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-utilities\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.435226 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751420d5-1809-406a-bef8-8e4015d9763b-catalog-content\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.458146 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvtnf\" (UniqueName: \"kubernetes.io/projected/751420d5-1809-406a-bef8-8e4015d9763b-kube-api-access-qvtnf\") pod \"redhat-operators-gshr8\" (UID: \"751420d5-1809-406a-bef8-8e4015d9763b\") " pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.580129 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.650186 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znznv"] Jan 31 15:00:52 crc kubenswrapper[4763]: W0131 15:00:52.658481 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd877abcd_9d8f_4597_b41c_4026d954cc62.slice/crio-3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1 WatchSource:0}: Error finding container 3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1: Status 404 returned error can't find the container with id 3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1 Jan 31 15:00:52 crc kubenswrapper[4763]: I0131 15:00:52.781486 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gshr8"] Jan 31 15:00:52 crc kubenswrapper[4763]: W0131 15:00:52.800926 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod751420d5_1809_406a_bef8_8e4015d9763b.slice/crio-7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1 WatchSource:0}: Error finding container 7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1: Status 404 returned error can't find the container with id 7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1 Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.022105 4763 generic.go:334] "Generic (PLEG): container finished" podID="751420d5-1809-406a-bef8-8e4015d9763b" containerID="ddcaf533e632566e5c4269beae815a92218a430b3b9db331c48179a6b13b0cd9" exitCode=0 Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.022357 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerDied","Data":"ddcaf533e632566e5c4269beae815a92218a430b3b9db331c48179a6b13b0cd9"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.023922 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerStarted","Data":"7cb99f8b81c61afa0cf480506b6a47630b9cfbdd85e3616597b786216fca87e1"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.025993 4763 generic.go:334] "Generic (PLEG): container finished" podID="d877abcd-9d8f-4597-b41c-4026d954cc62" containerID="c117fec153a7942bbf26ced795ff3a5bad0a2f3b3312d724c2164f2f2a60e711" exitCode=0 Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.026208 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerDied","Data":"c117fec153a7942bbf26ced795ff3a5bad0a2f3b3312d724c2164f2f2a60e711"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.026859 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerStarted","Data":"3b2a5091f963da8fc1e5440cf7010aa5220b66eef122ef1654b140b520ddeff1"} Jan 31 15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.054515 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2434f0b9-846a-444c-b487-745d4010002b" path="/var/lib/kubelet/pods/2434f0b9-846a-444c-b487-745d4010002b/volumes" Jan 31 
15:00:53 crc kubenswrapper[4763]: I0131 15:00:53.056069 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a35a73-67a0-4bb4-9954-46350d31b017" path="/var/lib/kubelet/pods/b8a35a73-67a0-4bb4-9954-46350d31b017/volumes" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.035145 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerStarted","Data":"bf763c3e12e9511d327e71d4e25b967bef5da397d1f97369da781f44007446f6"} Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.038198 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerStarted","Data":"1553f6bb3bbf0575af47634359e53323fc49b2bbe8d6197bb975720bff6376b1"} Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.462242 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-frpn9"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.464130 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.466448 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.479449 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frpn9"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.565477 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7h8\" (UniqueName: \"kubernetes.io/projected/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-kube-api-access-kz7h8\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.565574 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-utilities\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.565789 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-catalog-content\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.660432 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v59tf"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.661579 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.663721 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.666612 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-utilities\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.666768 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-catalog-content\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.666804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7h8\" (UniqueName: \"kubernetes.io/projected/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-kube-api-access-kz7h8\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.667350 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-catalog-content\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.667516 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-utilities\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.670191 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v59tf"] Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.733501 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7h8\" (UniqueName: \"kubernetes.io/projected/5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3-kube-api-access-kz7h8\") pod \"certified-operators-frpn9\" (UID: \"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3\") " pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.768573 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pkz\" (UniqueName: \"kubernetes.io/projected/e2f6ea13-f993-4138-b5d5-a549e9aae21b-kube-api-access-q6pkz\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.768633 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-catalog-content\") pod \"community-operators-v59tf\" (UID: 
\"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.768668 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-utilities\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.790594 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.870222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pkz\" (UniqueName: \"kubernetes.io/projected/e2f6ea13-f993-4138-b5d5-a549e9aae21b-kube-api-access-q6pkz\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.870712 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-catalog-content\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.870737 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-utilities\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.872089 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-catalog-content\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.872397 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f6ea13-f993-4138-b5d5-a549e9aae21b-utilities\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:54 crc kubenswrapper[4763]: I0131 15:00:54.895626 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pkz\" (UniqueName: \"kubernetes.io/projected/e2f6ea13-f993-4138-b5d5-a549e9aae21b-kube-api-access-q6pkz\") pod \"community-operators-v59tf\" (UID: \"e2f6ea13-f993-4138-b5d5-a549e9aae21b\") " pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.003813 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-frpn9"] Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.049064 4763 generic.go:334] "Generic (PLEG): container finished" podID="751420d5-1809-406a-bef8-8e4015d9763b" containerID="bf763c3e12e9511d327e71d4e25b967bef5da397d1f97369da781f44007446f6" exitCode=0 Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.056551 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerDied","Data":"bf763c3e12e9511d327e71d4e25b967bef5da397d1f97369da781f44007446f6"} Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.060382 4763 generic.go:334] "Generic (PLEG): container finished" podID="d877abcd-9d8f-4597-b41c-4026d954cc62" containerID="1553f6bb3bbf0575af47634359e53323fc49b2bbe8d6197bb975720bff6376b1" exitCode=0 Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.060462 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerDied","Data":"1553f6bb3bbf0575af47634359e53323fc49b2bbe8d6197bb975720bff6376b1"} Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.063839 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerStarted","Data":"692f822dcf6f33c7e4f1099ee2bd24b697bb0e3c08db689dabfdb8feb4b46a43"} Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.079178 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:00:55 crc kubenswrapper[4763]: I0131 15:00:55.240267 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v59tf"] Jan 31 15:00:55 crc kubenswrapper[4763]: W0131 15:00:55.243435 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f6ea13_f993_4138_b5d5_a549e9aae21b.slice/crio-e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225 WatchSource:0}: Error finding container e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225: Status 404 returned error can't find the container with id e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225 Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.071624 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gshr8" event={"ID":"751420d5-1809-406a-bef8-8e4015d9763b","Type":"ContainerStarted","Data":"948be45851843087686767c5b56e2655ccdf6c3a5bdb938c8d82cc9d38346b16"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.073843 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znznv" event={"ID":"d877abcd-9d8f-4597-b41c-4026d954cc62","Type":"ContainerStarted","Data":"8330978aa0524b41cae166dfabefdd6921a78a08c86af0bcad387d3f5b44e71b"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.075637 4763 generic.go:334] "Generic (PLEG): container finished" podID="5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3" containerID="c4a0f5cb76157a8898af1357d6f56134d4c3e3555d17bcd012e8ca76746b4a8f" exitCode=0 Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.075723 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerDied","Data":"c4a0f5cb76157a8898af1357d6f56134d4c3e3555d17bcd012e8ca76746b4a8f"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.080173 4763 generic.go:334] "Generic (PLEG): container finished" podID="e2f6ea13-f993-4138-b5d5-a549e9aae21b" containerID="8044f0e198e2d99c1aaa9156f169d2b8901e85bd0917010f8c419488c46c24dc" exitCode=0 Jan 31 15:00:56 crc 
kubenswrapper[4763]: I0131 15:00:56.080305 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerDied","Data":"8044f0e198e2d99c1aaa9156f169d2b8901e85bd0917010f8c419488c46c24dc"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.080410 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerStarted","Data":"e68a120ea9937e6c9fc25705105b6768a0da7188b3c8199ec831cf1dfc15e225"} Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.092511 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gshr8" podStartSLOduration=1.673847795 podStartE2EDuration="4.09249322s" podCreationTimestamp="2026-01-31 15:00:52 +0000 UTC" firstStartedPulling="2026-01-31 15:00:53.024013242 +0000 UTC m=+372.778751525" lastFinishedPulling="2026-01-31 15:00:55.442658657 +0000 UTC m=+375.197396950" observedRunningTime="2026-01-31 15:00:56.089412745 +0000 UTC m=+375.844151038" watchObservedRunningTime="2026-01-31 15:00:56.09249322 +0000 UTC m=+375.847231523" Jan 31 15:00:56 crc kubenswrapper[4763]: I0131 15:00:56.103944 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znznv" podStartSLOduration=1.628491288 podStartE2EDuration="4.103925556s" podCreationTimestamp="2026-01-31 15:00:52 +0000 UTC" firstStartedPulling="2026-01-31 15:00:53.028103435 +0000 UTC m=+372.782841728" lastFinishedPulling="2026-01-31 15:00:55.503537693 +0000 UTC m=+375.258275996" observedRunningTime="2026-01-31 15:00:56.102068075 +0000 UTC m=+375.856806368" watchObservedRunningTime="2026-01-31 15:00:56.103925556 +0000 UTC m=+375.858663839" Jan 31 15:00:57 crc kubenswrapper[4763]: I0131 15:00:57.087091 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerStarted","Data":"0efb032f5cfdb116df53fa2b4c6faf783f8ef21eb09b90eb902e145307ef4cbd"} Jan 31 15:00:57 crc kubenswrapper[4763]: I0131 15:00:57.089678 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerStarted","Data":"49f16f1a31ca93e24b5688517baec17736c9cde4ac9ee4d45b3fd5b8775345c2"} Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.096448 4763 generic.go:334] "Generic (PLEG): container finished" podID="e2f6ea13-f993-4138-b5d5-a549e9aae21b" containerID="49f16f1a31ca93e24b5688517baec17736c9cde4ac9ee4d45b3fd5b8775345c2" exitCode=0 Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.096483 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerDied","Data":"49f16f1a31ca93e24b5688517baec17736c9cde4ac9ee4d45b3fd5b8775345c2"} Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.099944 4763 generic.go:334] "Generic (PLEG): container finished" podID="5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3" containerID="0efb032f5cfdb116df53fa2b4c6faf783f8ef21eb09b90eb902e145307ef4cbd" exitCode=0 Jan 31 15:00:58 crc kubenswrapper[4763]: I0131 15:00:58.100052 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" 
event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerDied","Data":"0efb032f5cfdb116df53fa2b4c6faf783f8ef21eb09b90eb902e145307ef4cbd"} Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.106427 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v59tf" event={"ID":"e2f6ea13-f993-4138-b5d5-a549e9aae21b","Type":"ContainerStarted","Data":"e0aaf7ff1cd67de7251956d8a7e13c080b0fe0d719d8e0ff2adf68d5f685453a"} Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.108642 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-frpn9" event={"ID":"5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3","Type":"ContainerStarted","Data":"bb58dc372d93d6bccd825a875f095687a58a7ff096b90a26ce6224713fc42821"} Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.132071 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v59tf" podStartSLOduration=2.628991954 podStartE2EDuration="5.132051287s" podCreationTimestamp="2026-01-31 15:00:54 +0000 UTC" firstStartedPulling="2026-01-31 15:00:56.08199829 +0000 UTC m=+375.836736583" lastFinishedPulling="2026-01-31 15:00:58.585057623 +0000 UTC m=+378.339795916" observedRunningTime="2026-01-31 15:00:59.128515548 +0000 UTC m=+378.883253841" watchObservedRunningTime="2026-01-31 15:00:59.132051287 +0000 UTC m=+378.886789580" Jan 31 15:00:59 crc kubenswrapper[4763]: I0131 15:00:59.147864 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-frpn9" podStartSLOduration=2.707175099 podStartE2EDuration="5.147846394s" podCreationTimestamp="2026-01-31 15:00:54 +0000 UTC" firstStartedPulling="2026-01-31 15:00:56.077966708 +0000 UTC m=+375.832705001" lastFinishedPulling="2026-01-31 15:00:58.518638003 +0000 UTC m=+378.273376296" observedRunningTime="2026-01-31 15:00:59.144046339 +0000 UTC m=+378.898784632" watchObservedRunningTime="2026-01-31 15:00:59.147846394 +0000 UTC m=+378.902584687" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.089234 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8bf77" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.158602 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.416781 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.416921 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.469258 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.580592 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.580859 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:02 crc kubenswrapper[4763]: I0131 15:01:02.638729 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:03 crc kubenswrapper[4763]: I0131 15:01:03.174605 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gshr8" Jan 31 15:01:03 crc kubenswrapper[4763]: I0131 15:01:03.201885 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znznv" Jan 31 15:01:04 crc kubenswrapper[4763]: I0131 15:01:04.791438 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:04 crc kubenswrapper[4763]: I0131 15:01:04.791756 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:04 crc kubenswrapper[4763]: I0131 15:01:04.835339 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.080399 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.080860 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.117336 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.185954 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v59tf" Jan 31 15:01:05 crc kubenswrapper[4763]: I0131 15:01:05.187253 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-frpn9" Jan 31 15:01:14 crc kubenswrapper[4763]: I0131 15:01:14.178220 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:01:14 crc kubenswrapper[4763]: I0131 15:01:14.179271 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.193021 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry" containerID="cri-o://361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" gracePeriod=30 Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.589260 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733207 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733337 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733367 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733428 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733572 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.733601 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") pod \"329bb364-3958-490e-b065-d2ce7ee1567d\" (UID: \"329bb364-3958-490e-b065-d2ce7ee1567d\") " Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.734370 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.734544 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.739047 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.739814 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.741216 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f" (OuterVolumeSpecName: "kube-api-access-2df2f") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "kube-api-access-2df2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.741245 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.749312 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.754273 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "329bb364-3958-490e-b065-d2ce7ee1567d" (UID: "329bb364-3958-490e-b065-d2ce7ee1567d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835331 4763 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835393 4763 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/329bb364-3958-490e-b065-d2ce7ee1567d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835420 4763 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835445 4763 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835467 4763 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/329bb364-3958-490e-b065-d2ce7ee1567d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835490 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2df2f\" (UniqueName: \"kubernetes.io/projected/329bb364-3958-490e-b065-d2ce7ee1567d-kube-api-access-2df2f\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:27 crc kubenswrapper[4763]: I0131 15:01:27.835602 4763 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/329bb364-3958-490e-b065-d2ce7ee1567d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.292928 4763 generic.go:334] "Generic (PLEG): container finished" podID="329bb364-3958-490e-b065-d2ce7ee1567d" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" exitCode=0 Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.292982 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerDied","Data":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"} Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293025 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" event={"ID":"329bb364-3958-490e-b065-d2ce7ee1567d","Type":"ContainerDied","Data":"f6421d1d39f19dfe9997df0c879a0f9ff7802342de47df550a2b31d059ccd341"} Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293030 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dzr7c" Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.293047 4763 scope.go:117] "RemoveContainer" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.319317 4763 scope.go:117] "RemoveContainer" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" Jan 31 15:01:28 crc kubenswrapper[4763]: E0131 15:01:28.319854 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545\": container with ID starting with 361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545 not found: ID does not exist" containerID="361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545" Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.319907 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545"} err="failed to get container status \"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545\": rpc error: code = NotFound desc = could not find container \"361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545\": container with ID starting with 361b73b8a392606d46b72682540045b4df3a4bf83b068a8dab1993dc20255545 not found: ID does not exist" Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.347015 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 15:01:28 crc kubenswrapper[4763]: I0131 15:01:28.368128 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dzr7c"] Jan 31 15:01:29 crc kubenswrapper[4763]: I0131 15:01:29.054351 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" path="/var/lib/kubelet/pods/329bb364-3958-490e-b065-d2ce7ee1567d/volumes" Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.177564 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.178898 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.178982 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.179758 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 
15:01:44.179848 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b" gracePeriod=600 Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.402168 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b" exitCode=0 Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.402225 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b"} Jan 31 15:01:44 crc kubenswrapper[4763]: I0131 15:01:44.402303 4763 scope.go:117] "RemoveContainer" containerID="3a8d61e3c06aea811941e18d4150bbe171abfddfc8b9cdce70fe6e590961a6cb" Jan 31 15:01:45 crc kubenswrapper[4763]: I0131 15:01:45.412221 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629"} Jan 31 15:03:44 crc kubenswrapper[4763]: I0131 15:03:44.177848 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:03:44 crc kubenswrapper[4763]: I0131 15:03:44.178414 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:14 crc kubenswrapper[4763]: I0131 15:04:14.177098 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:04:14 crc kubenswrapper[4763]: I0131 15:04:14.177841 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.177211 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.177821 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.177888 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.178716 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.178823 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629" gracePeriod=600 Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.636218 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629" exitCode=0 Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.636285 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629"} Jan 31 15:04:44 crc kubenswrapper[4763]: I0131 15:04:44.636336 4763 scope.go:117] "RemoveContainer" containerID="b55482ed2ed4c41f6d8a4ff378f658a5880b1596f1e3f6ef24577f1114937a3b" Jan 31 15:04:45 crc kubenswrapper[4763]: I0131 15:04:45.648775 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085"} Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.447053 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448313 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller" containerID="cri-o://b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8" gracePeriod=30 Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448821 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb" containerID="cri-o://3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b" gracePeriod=30 Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448879 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb" containerID="cri-o://92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e" gracePeriod=30 
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448930 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd" containerID="cri-o://67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448966 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node" containerID="cri-o://897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.448989 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging" containerID="cri-o://c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.449288 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d" gracePeriod=30
Jan 31 15:05:38 crc kubenswrapper[4763]: I0131 15:05:38.485136 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller" containerID="cri-o://0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9" gracePeriod=30
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.003239 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/2.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004231 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/1.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004298 4763 generic.go:334] "Generic (PLEG): container finished" podID="2335d04f-10b2-4cf8-aae6-236650539c74" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6" exitCode=2
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004416 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerDied","Data":"2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.004504 4763 scope.go:117] "RemoveContainer" containerID="ab1c5cd9f9fb249e12361258bc6e5e83c1ac27131f803a258757f251382f9d44"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.005469 4763 scope.go:117] "RemoveContainer" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.006013 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qzkhg_openshift-multus(2335d04f-10b2-4cf8-aae6-236650539c74)\"" pod="openshift-multus/multus-qzkhg" podUID="2335d04f-10b2-4cf8-aae6-236650539c74"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.008614 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovnkube-controller/3.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.012505 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-acl-logging/0.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013233 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-controller/0.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013726 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9" exitCode=0
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013766 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b" exitCode=0
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013783 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e" exitCode=0
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013782 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013884 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013919 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013944 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013799 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294" exitCode=0
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.013995 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d" exitCode=0
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014029 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b" exitCode=0
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014051 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453" exitCode=143
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014068 4763 generic.go:334] "Generic (PLEG): container finished" podID="047ce610-09fa-482b-8d29-45ad376d12b3" containerID="b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8" exitCode=143
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014094 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014115 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014133 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.014151 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8"}
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.046260 4763 scope.go:117] "RemoveContainer" containerID="c1d9a3481eb12d0b1658e4141e9fa237e24ea0a712155a96e32eb577f03cd862"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.195424 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-acl-logging/0.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.197264 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-controller/0.log"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.198316 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257583 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mjppx"]
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257808 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257819 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257829 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257835 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257843 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257849 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257859 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257866 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257875 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257880 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257887 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257892 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257900 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257905 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257916 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257921 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257927 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257932 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257947 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257952 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257960 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257965 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257974 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kubecfg-setup"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257979 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kubecfg-setup"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257986 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.257991 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node"
Jan 31 15:05:39 crc kubenswrapper[4763]: E0131 15:05:39.257998 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258003 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258087 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="329bb364-3958-490e-b065-d2ce7ee1567d" containerName="registry"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258095 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258104 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-acl-logging"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258110 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258117 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="nbdb"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258125 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-node"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258139 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="kube-rbac-proxy-ovn-metrics"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258145 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="northd"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258153 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovn-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258159 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="sbdb"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258166 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258322 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.258331 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" containerName="ovnkube-controller"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.259717 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx"
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284171 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284226 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284264 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284308 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284331 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284350 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284377 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284404 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284430 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284462 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284485 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284513 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284530 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284554 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284584 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284607 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284625 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284644 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284671 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284719 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") pod \"047ce610-09fa-482b-8d29-45ad376d12b3\" (UID: \"047ce610-09fa-482b-8d29-45ad376d12b3\") "
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.284991 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.285026 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.285046 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.285066 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287309 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log" (OuterVolumeSpecName: "node-log") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287346 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket" (OuterVolumeSpecName: "log-socket") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287744 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287795 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287916 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287969 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288007 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288017 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3").
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288034 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288062 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash" (OuterVolumeSpecName: "host-slash") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.287987 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.288271 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.290554 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.291985 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm" (OuterVolumeSpecName: "kube-api-access-rxcsm") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "kube-api-access-rxcsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.292432 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.314489 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "047ce610-09fa-482b-8d29-45ad376d12b3" (UID: "047ce610-09fa-482b-8d29-45ad376d12b3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.386920 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-slash\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387066 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-bin\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387120 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pszp\" (UniqueName: \"kubernetes.io/projected/d24141e3-c7ee-4d60-ac74-d439fc532720-kube-api-access-2pszp\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387150 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-script-lib\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387185 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-systemd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387205 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-log-socket\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387282 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-netns\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-env-overrides\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387376 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-systemd-units\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387413 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-node-log\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387433 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-config\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387458 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387549 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-etc-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387623 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387683 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-ovn\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387777 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387803 4763 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d24141e3-c7ee-4d60-ac74-d439fc532720-ovn-node-metrics-cert\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387864 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-var-lib-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387905 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-kubelet\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.387949 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-netd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388055 4763 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/047ce610-09fa-482b-8d29-45ad376d12b3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388077 4763 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388091 4763 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388103 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388114 4763 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388126 4763 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388139 4763 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388152 4763 
reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388166 4763 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388178 4763 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-slash\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388190 4763 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/047ce610-09fa-482b-8d29-45ad376d12b3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388205 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxcsm\" (UniqueName: \"kubernetes.io/projected/047ce610-09fa-482b-8d29-45ad376d12b3-kube-api-access-rxcsm\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388219 4763 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388231 4763 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388242 4763 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388254 4763 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388266 4763 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388280 4763 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388292 4763 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.388305 4763 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/047ce610-09fa-482b-8d29-45ad376d12b3-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489387 4763 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-etc-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489438 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489465 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-etc-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489473 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-ovn\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489500 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-ovn\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489517 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489534 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d24141e3-c7ee-4d60-ac74-d439fc532720-ovn-node-metrics-cert\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489543 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489562 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-var-lib-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489580 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-kubelet\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-netd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-slash\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489636 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-var-lib-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489662 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-bin\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489667 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-netd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489685 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-kubelet\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489681 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489721 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-cni-bin\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489732 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pszp\" (UniqueName: 
\"kubernetes.io/projected/d24141e3-c7ee-4d60-ac74-d439fc532720-kube-api-access-2pszp\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489879 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-script-lib\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489946 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-systemd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489980 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-log-socket\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490018 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-netns\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490052 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-env-overrides\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490082 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-systemd-units\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490121 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-node-log\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490178 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-config\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490213 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-openvswitch\") pod \"ovnkube-node-mjppx\" 
(UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.489743 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-slash\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490343 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-openvswitch\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490392 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-run-systemd\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490432 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-systemd-units\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490471 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-node-log\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490591 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-script-lib\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490644 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-log-socket\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.490669 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d24141e3-c7ee-4d60-ac74-d439fc532720-host-run-netns\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.491391 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-env-overrides\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.491462 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d24141e3-c7ee-4d60-ac74-d439fc532720-ovnkube-config\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.494666 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d24141e3-c7ee-4d60-ac74-d439fc532720-ovn-node-metrics-cert\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.509308 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pszp\" (UniqueName: \"kubernetes.io/projected/d24141e3-c7ee-4d60-ac74-d439fc532720-kube-api-access-2pszp\") pod \"ovnkube-node-mjppx\" (UID: \"d24141e3-c7ee-4d60-ac74-d439fc532720\") " pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: I0131 15:05:39.572810 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:39 crc kubenswrapper[4763]: W0131 15:05:39.595328 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd24141e3_c7ee_4d60_ac74_d439fc532720.slice/crio-7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210 WatchSource:0}: Error finding container 7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210: Status 404 returned error can't find the container with id 7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210 Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.025355 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-acl-logging/0.log" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.025931 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtknf_047ce610-09fa-482b-8d29-45ad376d12b3/ovn-controller/0.log" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.026421 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" event={"ID":"047ce610-09fa-482b-8d29-45ad376d12b3","Type":"ContainerDied","Data":"be99d205ca17bdd3dd3cbb28a994ab179c5742dad18ab9a3579a8c9f686ccdc3"} Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.026498 4763 scope.go:117] "RemoveContainer" containerID="0dbc532ebe28b0235c423161c9ad89a344c1f544a333aeb218dae16072e95df9" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.026444 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtknf" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.028176 4763 generic.go:334] "Generic (PLEG): container finished" podID="d24141e3-c7ee-4d60-ac74-d439fc532720" containerID="08f4a5d8ad494132e7532f03956285ddad16e65672eccccc10e297e1724de243" exitCode=0 Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.028226 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerDied","Data":"08f4a5d8ad494132e7532f03956285ddad16e65672eccccc10e297e1724de243"} Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.028338 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"7d826b376e1643b32f90e1390a5cc55300ae3d6a9a8dabbfc750dfffbf14f210"} Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.030581 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/2.log" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.044666 4763 scope.go:117] "RemoveContainer" containerID="3acf41ec0a1039c49e16b31ba5a2462b43cae103b2d43aefcdfb9105d2190d5b" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.062114 4763 scope.go:117] "RemoveContainer" containerID="92446cc63118db5cd3a2921f886a063ee74e74f803785e48c074beea107a0c7e" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.113226 4763 scope.go:117] "RemoveContainer" containerID="67e311da328fdd8430b6e62118d352c25acaa08e0452aa875fee5949391b0294" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.115767 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.125468 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtknf"] Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.132710 4763 scope.go:117] "RemoveContainer" containerID="2f881f67c065b8f75b42ca73e81b95ac1089a1f3392be160fbc30161d6051a1d" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.157079 4763 scope.go:117] "RemoveContainer" containerID="897ce6f2ad4648cde8cabbd9448505fa824eabd2c0ddb04f69dbff693033b30b" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.178793 4763 scope.go:117] "RemoveContainer" containerID="c5e64e5afef5b1fe0f61e2fcb4f1af37e2a99bb20f8390ac17348eb418408453" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.197340 4763 scope.go:117] "RemoveContainer" containerID="b5e2a7f5699174ac1253c88bd8f2819b3142de06b933eea71f02da39b0ea7cc8" Jan 31 15:05:40 crc kubenswrapper[4763]: I0131 15:05:40.215886 4763 scope.go:117] "RemoveContainer" containerID="1ae14840012cc0ff97a01cc418d3d06775cd91e89adf65fe566b2d43baa06549" Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.052800 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="047ce610-09fa-482b-8d29-45ad376d12b3" path="/var/lib/kubelet/pods/047ce610-09fa-482b-8d29-45ad376d12b3/volumes" Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.056616 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"9c2add3a8a3fbcef344b9a19a3aa64d09e2b7eb60dc559075aa9f54de70cd752"} Jan 31 15:05:41 crc 
kubenswrapper[4763]: I0131 15:05:41.056873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"de9ecaabae2e3435689dbd9017f7b542fa61df999165b29765d4ef328e2dd914"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.057662 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"98caaa33cfdada43a1933595b463cfec1a3e0b3010fd0c27456b3a1430369489"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.058601 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"cd951a815a6dc616248fc284c8df27c4bec4f0c58b1817a630af68c6da69e08b"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.058652 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"9c2549a254ed45b1ca19c7a204cf50f64f0dc072fd9e4419b42531201b549086"} Jan 31 15:05:41 crc kubenswrapper[4763]: I0131 15:05:41.058739 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"b700b43ddb7611ad546584c4fa3cc9cbbd4d4124924b74cccc22aa5956294e50"} Jan 31 15:05:44 crc kubenswrapper[4763]: I0131 15:05:44.073828 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"fd87e806f75c0972b66a831419b5bba5bf5774245c847d1bf0d63622debbb397"} Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.090988 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" event={"ID":"d24141e3-c7ee-4d60-ac74-d439fc532720","Type":"ContainerStarted","Data":"99bf5316311a8a521366a4c89ffae9ea8446b7b412cd5c8c72503407df0c0b6d"} Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.091299 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.122668 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:46 crc kubenswrapper[4763]: I0131 15:05:46.125121 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" podStartSLOduration=7.125105742 podStartE2EDuration="7.125105742s" podCreationTimestamp="2026-01-31 15:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:05:46.124502566 +0000 UTC m=+665.879240859" watchObservedRunningTime="2026-01-31 15:05:46.125105742 +0000 UTC m=+665.879844035" Jan 31 15:05:47 crc kubenswrapper[4763]: I0131 15:05:47.099495 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:47 crc kubenswrapper[4763]: I0131 15:05:47.099817 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:47 crc 
kubenswrapper[4763]: I0131 15:05:47.145321 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:05:52 crc kubenswrapper[4763]: I0131 15:05:52.041665 4763 scope.go:117] "RemoveContainer" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6" Jan 31 15:05:52 crc kubenswrapper[4763]: E0131 15:05:52.042439 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qzkhg_openshift-multus(2335d04f-10b2-4cf8-aae6-236650539c74)\"" pod="openshift-multus/multus-qzkhg" podUID="2335d04f-10b2-4cf8-aae6-236650539c74" Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.857465 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9"] Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.858932 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.861440 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 15:06:03 crc kubenswrapper[4763]: I0131 15:06:03.882407 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9"] Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.041837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.041956 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.042114 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.144163 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.144241 4763 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.144361 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.145057 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.145103 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.179145 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: I0131 15:06:04.189287 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230126 4763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230337 4763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230494 4763 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:04 crc kubenswrapper[4763]: E0131 15:06:04.230737 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(505c2f4dc04f7e5a02fe7bec0da92416ba7ef5214523cc0ba45df18284bbebe1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" Jan 31 15:06:05 crc kubenswrapper[4763]: I0131 15:06:05.225916 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: I0131 15:06:05.228134 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272441 4763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272597 4763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272673 4763 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:05 crc kubenswrapper[4763]: E0131 15:06:05.272797 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace(0f29e959-ca5d-4407-ac1d-4ce7001597aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_openshift-marketplace_0f29e959-ca5d-4407-ac1d-4ce7001597aa_0(3f8b5b2abb629a09a12b9b076453a0d145f1440d83e9a15ad0fd9754bf35af68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" Jan 31 15:06:06 crc kubenswrapper[4763]: I0131 15:06:06.042101 4763 scope.go:117] "RemoveContainer" containerID="2769dbbb45eb5d98d9a4121f2a136f097f2dd1032e3c1238029f201a1307a3a6" Jan 31 15:06:06 crc kubenswrapper[4763]: I0131 15:06:06.233232 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qzkhg_2335d04f-10b2-4cf8-aae6-236650539c74/kube-multus/2.log" Jan 31 15:06:06 crc kubenswrapper[4763]: I0131 15:06:06.233631 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qzkhg" event={"ID":"2335d04f-10b2-4cf8-aae6-236650539c74","Type":"ContainerStarted","Data":"dfc6f9a262aaf2bc365c77362441daa01ca565a39142be499c9cfc6db48f9cf3"} Jan 31 15:06:09 crc kubenswrapper[4763]: I0131 15:06:09.612370 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mjppx" Jan 31 15:06:19 crc kubenswrapper[4763]: I0131 15:06:19.042217 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:19 crc kubenswrapper[4763]: I0131 15:06:19.043751 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:19 crc kubenswrapper[4763]: I0131 15:06:19.509452 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9"] Jan 31 15:06:19 crc kubenswrapper[4763]: W0131 15:06:19.520188 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f29e959_ca5d_4407_ac1d_4ce7001597aa.slice/crio-766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab WatchSource:0}: Error finding container 766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab: Status 404 returned error can't find the container with id 766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.327377 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerID="da83aa006c3b39539aeb81a1f57720f82d7a61cee890999fbb01f1ff6988e938" exitCode=0 Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.327830 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"da83aa006c3b39539aeb81a1f57720f82d7a61cee890999fbb01f1ff6988e938"} Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.327873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerStarted","Data":"766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab"} Jan 31 15:06:20 crc kubenswrapper[4763]: I0131 15:06:20.331177 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:06:22 crc kubenswrapper[4763]: I0131 15:06:22.347464 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerID="c23ff5eda4c8dfaee8eef876aae6d41938f9119db97f424a68199a88bc1d1de3" exitCode=0 Jan 31 15:06:22 crc kubenswrapper[4763]: I0131 15:06:22.347602 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"c23ff5eda4c8dfaee8eef876aae6d41938f9119db97f424a68199a88bc1d1de3"} Jan 31 15:06:23 crc kubenswrapper[4763]: I0131 15:06:23.358370 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerID="af0cb0c8d9e6478983d739d3cee2c3f17435f2214d8bb6285918c1e7c543c836" exitCode=0 Jan 31 15:06:23 crc kubenswrapper[4763]: I0131 15:06:23.358442 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"af0cb0c8d9e6478983d739d3cee2c3f17435f2214d8bb6285918c1e7c543c836"} Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.621913 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.726344 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") pod \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.726507 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") pod \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.726557 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") pod \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\" (UID: \"0f29e959-ca5d-4407-ac1d-4ce7001597aa\") " Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.728070 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle" (OuterVolumeSpecName: "bundle") pod "0f29e959-ca5d-4407-ac1d-4ce7001597aa" (UID: "0f29e959-ca5d-4407-ac1d-4ce7001597aa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.735603 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx" (OuterVolumeSpecName: "kube-api-access-ljvjx") pod "0f29e959-ca5d-4407-ac1d-4ce7001597aa" (UID: "0f29e959-ca5d-4407-ac1d-4ce7001597aa"). InnerVolumeSpecName "kube-api-access-ljvjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.749601 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util" (OuterVolumeSpecName: "util") pod "0f29e959-ca5d-4407-ac1d-4ce7001597aa" (UID: "0f29e959-ca5d-4407-ac1d-4ce7001597aa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.828138 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljvjx\" (UniqueName: \"kubernetes.io/projected/0f29e959-ca5d-4407-ac1d-4ce7001597aa-kube-api-access-ljvjx\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.828182 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:24 crc kubenswrapper[4763]: I0131 15:06:24.828201 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0f29e959-ca5d-4407-ac1d-4ce7001597aa-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:25 crc kubenswrapper[4763]: I0131 15:06:25.374311 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" event={"ID":"0f29e959-ca5d-4407-ac1d-4ce7001597aa","Type":"ContainerDied","Data":"766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab"} Jan 31 15:06:25 crc kubenswrapper[4763]: I0131 15:06:25.374371 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766e3d11bab8a63b7a7560ca252c7dabe92e4c8ca5a7e51b720c4aca0e1b57ab" Jan 31 15:06:25 crc kubenswrapper[4763]: I0131 15:06:25.374381 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.158531 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"] Jan 31 15:06:37 crc kubenswrapper[4763]: E0131 15:06:37.159309 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="extract" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159325 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="extract" Jan 31 15:06:37 crc kubenswrapper[4763]: E0131 15:06:37.159348 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="pull" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159356 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="pull" Jan 31 15:06:37 crc kubenswrapper[4763]: E0131 15:06:37.159367 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="util" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159379 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="util" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.159495 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f29e959-ca5d-4407-ac1d-4ce7001597aa" containerName="extract" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.160030 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.162747 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.163343 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-dw2bh" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.171733 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.171898 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.171735 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.192248 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"] Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.324296 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-webhook-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.324350 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-apiservice-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.324390 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgjc\" (UniqueName: \"kubernetes.io/projected/5a42a356-dc67-417c-b291-c079e880aa79-kube-api-access-rfgjc\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.390151 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"] Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.390894 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.392370 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.393688 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.393922 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bkfwf" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.412647 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"] Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.425439 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-webhook-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.425713 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-apiservice-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.425849 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgjc\" (UniqueName: \"kubernetes.io/projected/5a42a356-dc67-417c-b291-c079e880aa79-kube-api-access-rfgjc\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.431298 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-apiservice-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.431943 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a42a356-dc67-417c-b291-c079e880aa79-webhook-cert\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.444384 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgjc\" (UniqueName: \"kubernetes.io/projected/5a42a356-dc67-417c-b291-c079e880aa79-kube-api-access-rfgjc\") pod \"metallb-operator-controller-manager-64b6b97b4f-gbf25\" (UID: \"5a42a356-dc67-417c-b291-c079e880aa79\") " pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.477289 4763 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.527407 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zvs\" (UniqueName: \"kubernetes.io/projected/911c2e7f-03a5-49a2-8db7-5c63c602ef29-kube-api-access-g7zvs\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.527464 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-webhook-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.527499 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-apiservice-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.629045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-webhook-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.629111 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-apiservice-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.629178 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7zvs\" (UniqueName: \"kubernetes.io/projected/911c2e7f-03a5-49a2-8db7-5c63c602ef29-kube-api-access-g7zvs\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.634478 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-apiservice-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.635954 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/911c2e7f-03a5-49a2-8db7-5c63c602ef29-webhook-cert\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: 
\"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.645885 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7zvs\" (UniqueName: \"kubernetes.io/projected/911c2e7f-03a5-49a2-8db7-5c63c602ef29-kube-api-access-g7zvs\") pod \"metallb-operator-webhook-server-6448f7d6f6-k9gcj\" (UID: \"911c2e7f-03a5-49a2-8db7-5c63c602ef29\") " pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.702267 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.896348 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj"] Jan 31 15:06:37 crc kubenswrapper[4763]: W0131 15:06:37.903808 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod911c2e7f_03a5_49a2_8db7_5c63c602ef29.slice/crio-ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6 WatchSource:0}: Error finding container ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6: Status 404 returned error can't find the container with id ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6 Jan 31 15:06:37 crc kubenswrapper[4763]: I0131 15:06:37.945058 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25"] Jan 31 15:06:38 crc kubenswrapper[4763]: I0131 15:06:38.446253 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" event={"ID":"911c2e7f-03a5-49a2-8db7-5c63c602ef29","Type":"ContainerStarted","Data":"ed16c24acd643b1d907f56137191fb7b7054e8ac3b529547345eb9610ee7faf6"} Jan 31 15:06:38 crc kubenswrapper[4763]: I0131 15:06:38.447421 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" event={"ID":"5a42a356-dc67-417c-b291-c079e880aa79","Type":"ContainerStarted","Data":"49d472a5350891e0e5a1a129c47b3d26a95753531d92f824e0e0974756f8421d"} Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.487352 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" event={"ID":"5a42a356-dc67-417c-b291-c079e880aa79","Type":"ContainerStarted","Data":"750b34df214c48096aa6d17e78d662f78f824d835bcbd13e98c41c320c2d6db8"} Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.487916 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.496238 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" event={"ID":"911c2e7f-03a5-49a2-8db7-5c63c602ef29","Type":"ContainerStarted","Data":"31884fcc5659db72f3dfc678e7113f4032b571741d095651212dde512160e971"} Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.496636 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.518185 4763 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" podStartSLOduration=1.325176317 podStartE2EDuration="5.518165563s" podCreationTimestamp="2026-01-31 15:06:37 +0000 UTC" firstStartedPulling="2026-01-31 15:06:37.965090833 +0000 UTC m=+717.719829126" lastFinishedPulling="2026-01-31 15:06:42.158080069 +0000 UTC m=+721.912818372" observedRunningTime="2026-01-31 15:06:42.515840992 +0000 UTC m=+722.270579275" watchObservedRunningTime="2026-01-31 15:06:42.518165563 +0000 UTC m=+722.272903856" Jan 31 15:06:42 crc kubenswrapper[4763]: I0131 15:06:42.553565 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" podStartSLOduration=1.290207969 podStartE2EDuration="5.553541501s" podCreationTimestamp="2026-01-31 15:06:37 +0000 UTC" firstStartedPulling="2026-01-31 15:06:37.909831823 +0000 UTC m=+717.664570116" lastFinishedPulling="2026-01-31 15:06:42.173165315 +0000 UTC m=+721.927903648" observedRunningTime="2026-01-31 15:06:42.539059131 +0000 UTC m=+722.293797424" watchObservedRunningTime="2026-01-31 15:06:42.553541501 +0000 UTC m=+722.308279794" Jan 31 15:06:44 crc kubenswrapper[4763]: I0131 15:06:44.176829 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:06:44 crc kubenswrapper[4763]: I0131 15:06:44.176902 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:06:57 crc kubenswrapper[4763]: I0131 15:06:57.709493 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6448f7d6f6-k9gcj" Jan 31 15:07:12 crc kubenswrapper[4763]: I0131 15:07:12.167422 4763 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 15:07:14 crc kubenswrapper[4763]: I0131 15:07:14.176971 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:07:14 crc kubenswrapper[4763]: I0131 15:07:14.177042 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:07:17 crc kubenswrapper[4763]: I0131 15:07:17.481423 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64b6b97b4f-gbf25" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.187216 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"] Jan 31 15:07:18 crc kubenswrapper[4763]: 
I0131 15:07:18.188142 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.190932 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.191019 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7bbkb" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.191945 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ft4k2"] Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.207886 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.212099 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.212447 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.250835 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"] Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.283638 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kf27r"] Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.285616 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.289268 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.289679 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rvj58" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.289879 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.290041 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.291046 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-f4wjv"] Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.292137 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.302001 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.318298 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-f4wjv"] Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.387995 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388061 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-metrics-certs\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388089 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8fe7a08d-0d51-422f-9477-932841b77158-metallb-excludel2\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388131 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5gt\" (UniqueName: \"kubernetes.io/projected/8fe7a08d-0d51-422f-9477-932841b77158-kube-api-access-qx5gt\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388353 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-conf\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388398 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngng\" (UniqueName: \"kubernetes.io/projected/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-kube-api-access-hngng\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388426 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics-certs\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388453 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-reloader\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " 
pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388595 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-startup\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388643 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcss\" (UniqueName: \"kubernetes.io/projected/35cf5cc4-3973-4d1c-b52a-804293bb1f25-kube-api-access-vqcss\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388668 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388721 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-sockets\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.388790 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489851 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-metrics-certs\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489898 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8fe7a08d-0d51-422f-9477-932841b77158-metallb-excludel2\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489921 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5gt\" (UniqueName: \"kubernetes.io/projected/8fe7a08d-0d51-422f-9477-932841b77158-kube-api-access-qx5gt\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489947 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-conf\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489970 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-cert\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.489991 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngng\" (UniqueName: \"kubernetes.io/projected/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-kube-api-access-hngng\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490009 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics-certs\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490030 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-reloader\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490047 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-startup\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490067 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcss\" (UniqueName: \"kubernetes.io/projected/35cf5cc4-3973-4d1c-b52a-804293bb1f25-kube-api-access-vqcss\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490084 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpp74\" (UniqueName: \"kubernetes.io/projected/30f91c96-0c0b-4426-986d-715d11a222b3-kube-api-access-zpp74\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490364 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.490441 4763 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490465 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-sockets\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.490493 4763 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist podName:8fe7a08d-0d51-422f-9477-932841b77158 nodeName:}" failed. No retries permitted until 2026-01-31 15:07:18.990476417 +0000 UTC m=+758.745214700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist") pod "speaker-kf27r" (UID: "8fe7a08d-0d51-422f-9477-932841b77158") : secret "metallb-memberlist" not found Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490553 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-metrics-certs\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490638 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8fe7a08d-0d51-422f-9477-932841b77158-metallb-excludel2\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490584 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-conf\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490641 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-reloader\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490889 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-sockets\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.490986 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.491305 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" 
(UniqueName: \"kubernetes.io/configmap/35cf5cc4-3973-4d1c-b52a-804293bb1f25-frr-startup\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.510294 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-metrics-certs\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.510342 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35cf5cc4-3973-4d1c-b52a-804293bb1f25-metrics-certs\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.512303 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.516421 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngng\" (UniqueName: \"kubernetes.io/projected/d9c89dc4-758c-449e-bd6c-76f27ee6ecec-kube-api-access-hngng\") pod \"frr-k8s-webhook-server-7df86c4f6c-wwdkt\" (UID: \"d9c89dc4-758c-449e-bd6c-76f27ee6ecec\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.517979 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.526768 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5gt\" (UniqueName: \"kubernetes.io/projected/8fe7a08d-0d51-422f-9477-932841b77158-kube-api-access-qx5gt\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.528245 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcss\" (UniqueName: \"kubernetes.io/projected/35cf5cc4-3973-4d1c-b52a-804293bb1f25-kube-api-access-vqcss\") pod \"frr-k8s-ft4k2\" (UID: \"35cf5cc4-3973-4d1c-b52a-804293bb1f25\") " pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.530359 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.591752 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-cert\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.592028 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpp74\" (UniqueName: \"kubernetes.io/projected/30f91c96-0c0b-4426-986d-715d11a222b3-kube-api-access-zpp74\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.592073 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-metrics-certs\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.593618 4763 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.596975 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-metrics-certs\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.606817 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30f91c96-0c0b-4426-986d-715d11a222b3-cert\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.607990 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpp74\" (UniqueName: \"kubernetes.io/projected/30f91c96-0c0b-4426-986d-715d11a222b3-kube-api-access-zpp74\") pod \"controller-6968d8fdc4-f4wjv\" (UID: \"30f91c96-0c0b-4426-986d-715d11a222b3\") " pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.633634 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.712272 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"1fc4a1ccb90fd122f17308fbd30e5fe162996920b0a4145c865eeec8dd6a52ba"} Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.728621 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt"] Jan 31 15:07:18 crc kubenswrapper[4763]: W0131 15:07:18.729609 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c89dc4_758c_449e_bd6c_76f27ee6ecec.slice/crio-a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4 WatchSource:0}: Error finding container a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4: Status 404 returned error can't find the container with id a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4 Jan 31 15:07:18 crc kubenswrapper[4763]: I0131 15:07:18.996044 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.996284 4763 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 31 15:07:18 crc kubenswrapper[4763]: E0131 15:07:18.996555 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist podName:8fe7a08d-0d51-422f-9477-932841b77158 nodeName:}" failed. No retries permitted until 2026-01-31 15:07:19.996532874 +0000 UTC m=+759.751271167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist") pod "speaker-kf27r" (UID: "8fe7a08d-0d51-422f-9477-932841b77158") : secret "metallb-memberlist" not found Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.090196 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-f4wjv"] Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.719827 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" event={"ID":"d9c89dc4-758c-449e-bd6c-76f27ee6ecec","Type":"ContainerStarted","Data":"a7dcb0cf0509a71aab5386e80f158438c3aa80945bd5e0ec64b7b965c69cf6c4"} Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.721423 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-f4wjv" event={"ID":"30f91c96-0c0b-4426-986d-715d11a222b3","Type":"ContainerStarted","Data":"429257d4d504696214848305eccff0b19f84735a0bf3fb7e4d1dc571cab4bb9b"} Jan 31 15:07:19 crc kubenswrapper[4763]: I0131 15:07:19.721456 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-f4wjv" event={"ID":"30f91c96-0c0b-4426-986d-715d11a222b3","Type":"ContainerStarted","Data":"4387dbf4296daf6096866543a418fd4476afe84abfa4652518b2fb13e730235e"} Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.010509 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.018946 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8fe7a08d-0d51-422f-9477-932841b77158-memberlist\") pod \"speaker-kf27r\" (UID: \"8fe7a08d-0d51-422f-9477-932841b77158\") " pod="metallb-system/speaker-kf27r" Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.109905 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kf27r" Jan 31 15:07:20 crc kubenswrapper[4763]: W0131 15:07:20.139733 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fe7a08d_0d51_422f_9477_932841b77158.slice/crio-6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a WatchSource:0}: Error finding container 6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a: Status 404 returned error can't find the container with id 6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.736659 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kf27r" event={"ID":"8fe7a08d-0d51-422f-9477-932841b77158","Type":"ContainerStarted","Data":"eb84555a6349b60fe3eff2843d08c010fead720b299e1f5d2fd517c6630e3506"} Jan 31 15:07:20 crc kubenswrapper[4763]: I0131 15:07:20.736720 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kf27r" event={"ID":"8fe7a08d-0d51-422f-9477-932841b77158","Type":"ContainerStarted","Data":"6e6f196411dd18b9d94a71fc99eafdda65f1d46718d50ffa2e0f5c4482e73d5a"} Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.763170 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kf27r" event={"ID":"8fe7a08d-0d51-422f-9477-932841b77158","Type":"ContainerStarted","Data":"1ed73e23ba2e08700d98ae7cab6b4f22bb3aab76fc101eb97d61fd68ca5e1593"} Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.763521 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kf27r" Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.771825 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-f4wjv" event={"ID":"30f91c96-0c0b-4426-986d-715d11a222b3","Type":"ContainerStarted","Data":"b37fc7cc42df4d8a0467c4decd4f3a29ff46e36f924e3f0187ff923d4ab40fe6"} Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.772303 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.793301 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kf27r" podStartSLOduration=3.428164625 podStartE2EDuration="5.793284039s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:20.441741037 +0000 UTC m=+760.196479330" lastFinishedPulling="2026-01-31 15:07:22.806860451 +0000 UTC m=+762.561598744" observedRunningTime="2026-01-31 15:07:23.780606396 +0000 UTC m=+763.535344689" watchObservedRunningTime="2026-01-31 15:07:23.793284039 +0000 UTC m=+763.548022332" Jan 31 15:07:23 crc kubenswrapper[4763]: I0131 15:07:23.802730 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-f4wjv" podStartSLOduration=2.253876254 podStartE2EDuration="5.802709667s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:19.253643813 +0000 UTC m=+759.008382166" lastFinishedPulling="2026-01-31 15:07:22.802477286 +0000 UTC m=+762.557215579" observedRunningTime="2026-01-31 15:07:23.798205679 +0000 UTC m=+763.552943972" watchObservedRunningTime="2026-01-31 15:07:23.802709667 +0000 UTC m=+763.557447960" Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.791211 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" event={"ID":"d9c89dc4-758c-449e-bd6c-76f27ee6ecec","Type":"ContainerStarted","Data":"4a8a8200947df19a17347c7e1c8765bc6f84296a0bac8799586287e283401f69"} Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.793306 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.793388 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerDied","Data":"af8c4f20e5951fe51fc462b54681a3afa8b058c208b40e8392d3b94efb4c16c7"} Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.793240 4763 generic.go:334] "Generic (PLEG): container finished" podID="35cf5cc4-3973-4d1c-b52a-804293bb1f25" containerID="af8c4f20e5951fe51fc462b54681a3afa8b058c208b40e8392d3b94efb4c16c7" exitCode=0 Jan 31 15:07:26 crc kubenswrapper[4763]: I0131 15:07:26.813047 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" podStartSLOduration=1.563502169 podStartE2EDuration="8.813023041s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:18.732219274 +0000 UTC m=+758.486957567" lastFinishedPulling="2026-01-31 15:07:25.981740136 +0000 UTC m=+765.736478439" observedRunningTime="2026-01-31 15:07:26.810476464 +0000 UTC m=+766.565214767" watchObservedRunningTime="2026-01-31 15:07:26.813023041 +0000 UTC m=+766.567761374" Jan 31 15:07:27 crc kubenswrapper[4763]: I0131 15:07:27.803281 4763 generic.go:334] "Generic (PLEG): container finished" podID="35cf5cc4-3973-4d1c-b52a-804293bb1f25" containerID="dc353e334658bc458deb25dfc745ce2859bb37591b50a71d564ddf83482d5107" exitCode=0 Jan 31 15:07:27 crc kubenswrapper[4763]: I0131 15:07:27.803350 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerDied","Data":"dc353e334658bc458deb25dfc745ce2859bb37591b50a71d564ddf83482d5107"} Jan 31 15:07:28 crc kubenswrapper[4763]: I0131 15:07:28.811353 4763 generic.go:334] "Generic (PLEG): container finished" podID="35cf5cc4-3973-4d1c-b52a-804293bb1f25" containerID="d27813fd97b27ee9cc877bc981f7a94d4b28da8825219ac87839bda047d52cab" exitCode=0 Jan 31 15:07:28 crc kubenswrapper[4763]: I0131 15:07:28.811461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerDied","Data":"d27813fd97b27ee9cc877bc981f7a94d4b28da8825219ac87839bda047d52cab"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.823885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"9aa29f971c84fd7f058aa5e42fa186b3dd76a5e4ca3e96f18afa51b6a05653c3"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824313 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"3f4a4120ce9e745aa4eb17c5b98801f9e7d9ed260f86e066afc039a2df74ccc9"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824341 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 
15:07:29.824357 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"56cd6bcb47e0aaa8d975742e12f3793b3f472becdfe6e50127eedad161392879"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824371 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"9f6f26276ab742cfdbf9c215ea8a998de0cec3d5aee011bca35bf58bf72e0370"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824387 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"9af1e51eef2e95d7801c8694ec506723b2705fd111ca099ffdbfd8a7a8ca64c2"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.824401 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ft4k2" event={"ID":"35cf5cc4-3973-4d1c-b52a-804293bb1f25","Type":"ContainerStarted","Data":"3cb31b4fd6b84e8ce9d27b65fbda97162e964453d3788871e724d9f68e6f33d2"} Jan 31 15:07:29 crc kubenswrapper[4763]: I0131 15:07:29.849241 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ft4k2" podStartSLOduration=4.606533822 podStartE2EDuration="11.849226905s" podCreationTimestamp="2026-01-31 15:07:18 +0000 UTC" firstStartedPulling="2026-01-31 15:07:18.689453961 +0000 UTC m=+758.444192254" lastFinishedPulling="2026-01-31 15:07:25.932147004 +0000 UTC m=+765.686885337" observedRunningTime="2026-01-31 15:07:29.847031397 +0000 UTC m=+769.601769700" watchObservedRunningTime="2026-01-31 15:07:29.849226905 +0000 UTC m=+769.603965198" Jan 31 15:07:30 crc kubenswrapper[4763]: I0131 15:07:30.117221 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kf27r" Jan 31 15:07:33 crc kubenswrapper[4763]: I0131 15:07:33.531358 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:33 crc kubenswrapper[4763]: I0131 15:07:33.594451 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.014506 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.016007 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.025100 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.025503 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.026250 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.027922 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-gf28d" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.037519 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"mariadb-operator-index-ttnjc\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.138515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"mariadb-operator-index-ttnjc\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.157076 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"mariadb-operator-index-ttnjc\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.339579 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.796283 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:36 crc kubenswrapper[4763]: W0131 15:07:36.798196 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6750409_e191_47cd_8abe_bf763a980ed5.slice/crio-55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30 WatchSource:0}: Error finding container 55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30: Status 404 returned error can't find the container with id 55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30 Jan 31 15:07:36 crc kubenswrapper[4763]: I0131 15:07:36.873159 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerStarted","Data":"55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30"} Jan 31 15:07:37 crc kubenswrapper[4763]: I0131 15:07:37.880776 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerStarted","Data":"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1"} Jan 31 15:07:37 crc kubenswrapper[4763]: I0131 15:07:37.900033 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-ttnjc" podStartSLOduration=2.104305513 podStartE2EDuration="2.900015244s" podCreationTimestamp="2026-01-31 15:07:35 +0000 UTC" firstStartedPulling="2026-01-31 15:07:36.800835776 +0000 UTC m=+776.555574079" lastFinishedPulling="2026-01-31 15:07:37.596545497 +0000 UTC m=+777.351283810" observedRunningTime="2026-01-31 15:07:37.897859147 +0000 UTC m=+777.652597520" watchObservedRunningTime="2026-01-31 15:07:37.900015244 +0000 UTC m=+777.654753537" Jan 31 15:07:38 crc kubenswrapper[4763]: I0131 15:07:38.523081 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wwdkt" Jan 31 15:07:38 crc kubenswrapper[4763]: I0131 15:07:38.535854 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ft4k2" Jan 31 15:07:38 crc kubenswrapper[4763]: I0131 15:07:38.637875 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-f4wjv" Jan 31 15:07:39 crc kubenswrapper[4763]: I0131 15:07:39.398528 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:39 crc kubenswrapper[4763]: I0131 15:07:39.897136 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-ttnjc" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" containerID="cri-o://2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" gracePeriod=2 Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.016288 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-d2rtv"] Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.017138 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.026318 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-d2rtv"] Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.096680 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkhf\" (UniqueName: \"kubernetes.io/projected/29673dd0-5315-4de5-bbc4-d8deb8581b9d-kube-api-access-2lkhf\") pod \"mariadb-operator-index-d2rtv\" (UID: \"29673dd0-5315-4de5-bbc4-d8deb8581b9d\") " pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.198367 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lkhf\" (UniqueName: \"kubernetes.io/projected/29673dd0-5315-4de5-bbc4-d8deb8581b9d-kube-api-access-2lkhf\") pod \"mariadb-operator-index-d2rtv\" (UID: \"29673dd0-5315-4de5-bbc4-d8deb8581b9d\") " pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.221183 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lkhf\" (UniqueName: \"kubernetes.io/projected/29673dd0-5315-4de5-bbc4-d8deb8581b9d-kube-api-access-2lkhf\") pod \"mariadb-operator-index-d2rtv\" (UID: \"29673dd0-5315-4de5-bbc4-d8deb8581b9d\") " pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.353990 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.631199 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-d2rtv"] Jan 31 15:07:40 crc kubenswrapper[4763]: W0131 15:07:40.638609 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29673dd0_5315_4de5_bbc4_d8deb8581b9d.slice/crio-432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384 WatchSource:0}: Error finding container 432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384: Status 404 returned error can't find the container with id 432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384 Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.744904 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.905246 4763 generic.go:334] "Generic (PLEG): container finished" podID="a6750409-e191-47cd-8abe-bf763a980ed5" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" exitCode=0 Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.905315 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerDied","Data":"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1"} Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.905330 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-ttnjc" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.906023 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-ttnjc" event={"ID":"a6750409-e191-47cd-8abe-bf763a980ed5","Type":"ContainerDied","Data":"55812768ef9e9acb8f57d03f8867d4cd30a89d38fe9b2fe3f1de7204e6895a30"} Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.906143 4763 scope.go:117] "RemoveContainer" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.907329 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-d2rtv" event={"ID":"29673dd0-5315-4de5-bbc4-d8deb8581b9d","Type":"ContainerStarted","Data":"432972debde771139e8dc45988516562069256c9b270fd3dc6b79f35df454384"} Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.907889 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") pod \"a6750409-e191-47cd-8abe-bf763a980ed5\" (UID: \"a6750409-e191-47cd-8abe-bf763a980ed5\") " Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.915848 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd" (OuterVolumeSpecName: "kube-api-access-lj4fd") pod "a6750409-e191-47cd-8abe-bf763a980ed5" (UID: "a6750409-e191-47cd-8abe-bf763a980ed5"). InnerVolumeSpecName "kube-api-access-lj4fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.927973 4763 scope.go:117] "RemoveContainer" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" Jan 31 15:07:40 crc kubenswrapper[4763]: E0131 15:07:40.928368 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1\": container with ID starting with 2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1 not found: ID does not exist" containerID="2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1" Jan 31 15:07:40 crc kubenswrapper[4763]: I0131 15:07:40.928395 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1"} err="failed to get container status \"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1\": rpc error: code = NotFound desc = could not find container \"2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1\": container with ID starting with 2eb386fd0219e205ad50dd211425524b310154ab635f166945c01cd0dd3b64c1 not found: ID does not exist" Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.010900 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj4fd\" (UniqueName: \"kubernetes.io/projected/a6750409-e191-47cd-8abe-bf763a980ed5-kube-api-access-lj4fd\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.226465 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.229620 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/mariadb-operator-index-ttnjc"] Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.915441 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-d2rtv" event={"ID":"29673dd0-5315-4de5-bbc4-d8deb8581b9d","Type":"ContainerStarted","Data":"3cfc93c04ffb2a207f1737ca08129197882fbf66636f7a931f0211e2f4411773"} Jan 31 15:07:41 crc kubenswrapper[4763]: I0131 15:07:41.945737 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-d2rtv" podStartSLOduration=2.509736496 podStartE2EDuration="2.945713543s" podCreationTimestamp="2026-01-31 15:07:39 +0000 UTC" firstStartedPulling="2026-01-31 15:07:40.642790443 +0000 UTC m=+780.397528736" lastFinishedPulling="2026-01-31 15:07:41.07876749 +0000 UTC m=+780.833505783" observedRunningTime="2026-01-31 15:07:41.936607625 +0000 UTC m=+781.691345958" watchObservedRunningTime="2026-01-31 15:07:41.945713543 +0000 UTC m=+781.700451856" Jan 31 15:07:43 crc kubenswrapper[4763]: I0131 15:07:43.054761 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" path="/var/lib/kubelet/pods/a6750409-e191-47cd-8abe-bf763a980ed5/volumes" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.177892 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.178387 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.178537 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.179806 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.179977 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085" gracePeriod=600 Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.945777 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085" exitCode=0 Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.945838 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" 
event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085"} Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.946398 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e"} Jan 31 15:07:44 crc kubenswrapper[4763]: I0131 15:07:44.946458 4763 scope.go:117] "RemoveContainer" containerID="9efbc169d52bc3e7c2ea07dab22d5bc7a445634e3b2db84476ba82a91a9cf629" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.013097 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:07:50 crc kubenswrapper[4763]: E0131 15:07:50.014314 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.014346 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.014585 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6750409-e191-47cd-8abe-bf763a980ed5" containerName="registry-server" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.015993 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.047499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.144359 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.144847 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.144924 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246352 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246423 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246472 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.246911 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.247020 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.271323 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"redhat-marketplace-4g6dt\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.344555 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.355192 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.355258 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.400539 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.784906 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:07:50 crc kubenswrapper[4763]: W0131 15:07:50.795131 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod011bc840_ca03_452f_8b2c_c3a8181b1883.slice/crio-75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01 WatchSource:0}: Error finding container 75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01: Status 404 returned error can't find the container with id 75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01 Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.996873 4763 generic.go:334] "Generic (PLEG): container finished" podID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" exitCode=0 Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.997301 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318"} Jan 31 15:07:50 crc kubenswrapper[4763]: I0131 15:07:50.997384 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerStarted","Data":"75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01"} Jan 31 15:07:51 crc kubenswrapper[4763]: I0131 15:07:51.053169 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-d2rtv" Jan 31 15:07:52 crc kubenswrapper[4763]: I0131 15:07:52.004093 4763 generic.go:334] "Generic (PLEG): container finished" podID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" exitCode=0 Jan 31 15:07:52 crc kubenswrapper[4763]: I0131 15:07:52.004194 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5"} Jan 31 15:07:53 crc kubenswrapper[4763]: I0131 15:07:53.015207 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerStarted","Data":"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89"} Jan 31 15:07:53 crc kubenswrapper[4763]: I0131 15:07:53.051791 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4g6dt" 
podStartSLOduration=2.6130294320000003 podStartE2EDuration="4.05176434s" podCreationTimestamp="2026-01-31 15:07:49 +0000 UTC" firstStartedPulling="2026-01-31 15:07:50.999818691 +0000 UTC m=+790.754557024" lastFinishedPulling="2026-01-31 15:07:52.438553599 +0000 UTC m=+792.193291932" observedRunningTime="2026-01-31 15:07:53.042905758 +0000 UTC m=+792.797644091" watchObservedRunningTime="2026-01-31 15:07:53.05176434 +0000 UTC m=+792.806502673" Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.880004 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"] Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.882800 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.885894 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.888471 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"] Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.978088 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.978205 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:58 crc kubenswrapper[4763]: I0131 15:07:58.978357 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.080401 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.080611 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " 
pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.080731 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.082135 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.082179 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.108299 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.203823 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:07:59 crc kubenswrapper[4763]: I0131 15:07:59.676481 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s"] Jan 31 15:07:59 crc kubenswrapper[4763]: W0131 15:07:59.684816 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50493718_9240_44a6_bb1a_4c6c97473f2d.slice/crio-3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad WatchSource:0}: Error finding container 3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad: Status 404 returned error can't find the container with id 3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.079380 4763 generic.go:334] "Generic (PLEG): container finished" podID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerID="08126f8f531873412f4f33de869d99e48b5d7e549cdb48af5e3d9b963d0ca5f8" exitCode=0 Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.079456 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"08126f8f531873412f4f33de869d99e48b5d7e549cdb48af5e3d9b963d0ca5f8"} Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.079875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerStarted","Data":"3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad"} Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.345163 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.345260 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:00 crc kubenswrapper[4763]: I0131 15:08:00.409966 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:01 crc kubenswrapper[4763]: I0131 15:08:01.088040 4763 generic.go:334] "Generic (PLEG): container finished" podID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerID="d759296e36ed7771bc791581a0938998910bf76fa0dfae164757b2d9c1d5aade" exitCode=0 Jan 31 15:08:01 crc kubenswrapper[4763]: I0131 15:08:01.088094 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"d759296e36ed7771bc791581a0938998910bf76fa0dfae164757b2d9c1d5aade"} Jan 31 15:08:01 crc kubenswrapper[4763]: E0131 15:08:01.130403 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50493718_9240_44a6_bb1a_4c6c97473f2d.slice/crio-conmon-d759296e36ed7771bc791581a0938998910bf76fa0dfae164757b2d9c1d5aade.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:08:01 crc kubenswrapper[4763]: I0131 15:08:01.164259 4763 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:02 crc kubenswrapper[4763]: I0131 15:08:02.103332 4763 generic.go:334] "Generic (PLEG): container finished" podID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerID="117782c4f1883d04d30b2babad15c1bb35d694737e3d0ddd44a957df63ad6994" exitCode=0 Jan 31 15:08:02 crc kubenswrapper[4763]: I0131 15:08:02.103610 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"117782c4f1883d04d30b2babad15c1bb35d694737e3d0ddd44a957df63ad6994"} Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.202347 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.203259 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4g6dt" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="registry-server" containerID="cri-o://2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" gracePeriod=2 Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.541942 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.643563 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") pod \"50493718-9240-44a6-bb1a-4c6c97473f2d\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.643633 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") pod \"50493718-9240-44a6-bb1a-4c6c97473f2d\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.643773 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") pod \"50493718-9240-44a6-bb1a-4c6c97473f2d\" (UID: \"50493718-9240-44a6-bb1a-4c6c97473f2d\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.650870 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l" (OuterVolumeSpecName: "kube-api-access-2877l") pod "50493718-9240-44a6-bb1a-4c6c97473f2d" (UID: "50493718-9240-44a6-bb1a-4c6c97473f2d"). InnerVolumeSpecName "kube-api-access-2877l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.653098 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle" (OuterVolumeSpecName: "bundle") pod "50493718-9240-44a6-bb1a-4c6c97473f2d" (UID: "50493718-9240-44a6-bb1a-4c6c97473f2d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.658468 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util" (OuterVolumeSpecName: "util") pod "50493718-9240-44a6-bb1a-4c6c97473f2d" (UID: "50493718-9240-44a6-bb1a-4c6c97473f2d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.673727 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.744992 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") pod \"011bc840-ca03-452f-8b2c-c3a8181b1883\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745047 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") pod \"011bc840-ca03-452f-8b2c-c3a8181b1883\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745077 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") pod \"011bc840-ca03-452f-8b2c-c3a8181b1883\" (UID: \"011bc840-ca03-452f-8b2c-c3a8181b1883\") " Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745242 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745255 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50493718-9240-44a6-bb1a-4c6c97473f2d-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.745273 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2877l\" (UniqueName: \"kubernetes.io/projected/50493718-9240-44a6-bb1a-4c6c97473f2d-kube-api-access-2877l\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.746337 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities" (OuterVolumeSpecName: "utilities") pod "011bc840-ca03-452f-8b2c-c3a8181b1883" (UID: "011bc840-ca03-452f-8b2c-c3a8181b1883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.748365 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp" (OuterVolumeSpecName: "kube-api-access-k9lkp") pod "011bc840-ca03-452f-8b2c-c3a8181b1883" (UID: "011bc840-ca03-452f-8b2c-c3a8181b1883"). InnerVolumeSpecName "kube-api-access-k9lkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.775404 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "011bc840-ca03-452f-8b2c-c3a8181b1883" (UID: "011bc840-ca03-452f-8b2c-c3a8181b1883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.846332 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lkp\" (UniqueName: \"kubernetes.io/projected/011bc840-ca03-452f-8b2c-c3a8181b1883-kube-api-access-k9lkp\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.846376 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:03 crc kubenswrapper[4763]: I0131 15:08:03.846392 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011bc840-ca03-452f-8b2c-c3a8181b1883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118023 4763 generic.go:334] "Generic (PLEG): container finished" podID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" exitCode=0 Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118067 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4g6dt" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118086 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89"} Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118111 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4g6dt" event={"ID":"011bc840-ca03-452f-8b2c-c3a8181b1883","Type":"ContainerDied","Data":"75c29313c29b71da356103a384a220d160f8a9d79b190e012c6178a65b81aa01"} Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.118128 4763 scope.go:117] "RemoveContainer" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.124271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" event={"ID":"50493718-9240-44a6-bb1a-4c6c97473f2d","Type":"ContainerDied","Data":"3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad"} Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.124332 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7d88da2611735de5486b2a7aaf488fe7772e27dcd69092990e2e041934dfad" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.124291 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.149683 4763 scope.go:117] "RemoveContainer" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.152216 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.158075 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4g6dt"] Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.183810 4763 scope.go:117] "RemoveContainer" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.209163 4763 scope.go:117] "RemoveContainer" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" Jan 31 15:08:04 crc kubenswrapper[4763]: E0131 15:08:04.209636 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89\": container with ID starting with 2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89 not found: ID does not exist" containerID="2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.209667 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89"} err="failed to get container status \"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89\": rpc error: code = NotFound desc = could not find container \"2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89\": container with ID starting with 2c24f7b7e8f9b08d8dfba26f6f61a83e4bdc432001584ff29c5bc2937f2b9c89 not found: ID does not exist" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.209717 4763 scope.go:117] "RemoveContainer" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" Jan 31 15:08:04 crc kubenswrapper[4763]: E0131 15:08:04.210082 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5\": container with ID starting with 252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5 not found: ID does not exist" containerID="252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.210109 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5"} err="failed to get container status \"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5\": rpc error: code = NotFound desc = could not find container \"252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5\": container with ID starting with 252b3a8b79e6b56f6d936f158ea79c8e17ba76cd9946a063ceda22a8679ff9b5 not found: ID does not exist" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.210126 4763 scope.go:117] "RemoveContainer" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" Jan 31 15:08:04 crc kubenswrapper[4763]: E0131 15:08:04.210352 4763 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318\": container with ID starting with 2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318 not found: ID does not exist" containerID="2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318" Jan 31 15:08:04 crc kubenswrapper[4763]: I0131 15:08:04.210375 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318"} err="failed to get container status \"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318\": rpc error: code = NotFound desc = could not find container \"2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318\": container with ID starting with 2557c8c87272deb809928ef17363c9bc7c0be1cebc439390a5e2451c131aa318 not found: ID does not exist" Jan 31 15:08:05 crc kubenswrapper[4763]: I0131 15:08:05.069067 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" path="/var/lib/kubelet/pods/011bc840-ca03-452f-8b2c-c3a8181b1883/volumes" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.576370 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"] Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577141 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="pull" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577158 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="pull" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577177 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-utilities" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577185 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-utilities" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577194 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="extract" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577201 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="extract" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577217 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="util" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577223 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="util" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577237 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-content" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577243 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="extract-content" Jan 31 15:08:12 crc kubenswrapper[4763]: E0131 15:08:12.577251 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" 
containerName="registry-server" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577258 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="registry-server" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577360 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="50493718-9240-44a6-bb1a-4c6c97473f2d" containerName="extract" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577377 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="011bc840-ca03-452f-8b2c-c3a8181b1883" containerName="registry-server" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.577851 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.579916 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.580333 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-kqhgp" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.581820 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.600917 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"] Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.776498 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5kn2\" (UniqueName: \"kubernetes.io/projected/30bcffc2-0054-475e-af66-74b73ec95edb-kube-api-access-w5kn2\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.776857 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-webhook-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.776992 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-apiservice-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.878346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5kn2\" (UniqueName: \"kubernetes.io/projected/30bcffc2-0054-475e-af66-74b73ec95edb-kube-api-access-w5kn2\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.878445 4763 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-webhook-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.878479 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-apiservice-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.883631 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-webhook-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.883745 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30bcffc2-0054-475e-af66-74b73ec95edb-apiservice-cert\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.898004 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5kn2\" (UniqueName: \"kubernetes.io/projected/30bcffc2-0054-475e-af66-74b73ec95edb-kube-api-access-w5kn2\") pod \"mariadb-operator-controller-manager-68956c85f5-mrnqc\" (UID: \"30bcffc2-0054-475e-af66-74b73ec95edb\") " pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:12 crc kubenswrapper[4763]: I0131 15:08:12.903640 4763 util.go:30] "No sandbox for pod can be found. 
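
The RemoveContainer / NotFound exchanges earlier in this window are benign: the kubelet re-issues a delete for containers CRI-O has already removed, the runtime answers NotFound, and the kubelet logs the error but treats the deletion as complete. A minimal sketch of that idempotent-delete pattern, with invented helper names (not the kubelet's actual code):

```go
// Sketch of idempotent container removal against a CRI-style runtime.
// removeFn stands in for the runtime's RemoveContainer call.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func removeContainer(id string, removeFn func(id string) error) error {
	if err := removeFn(id); err != nil {
		if status.Code(err) == codes.NotFound {
			// Already gone: note it and treat the delete as done,
			// the same pattern visible in the journal above.
			fmt.Printf("container %q already removed\n", id)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	gone := status.Error(codes.NotFound, "ID does not exist")
	err := removeContainer("252b3a8b", func(string) error { return gone })
	fmt.Println("result:", err) // result: <nil>
}
```
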
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:13 crc kubenswrapper[4763]: I0131 15:08:13.083611 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc"] Jan 31 15:08:13 crc kubenswrapper[4763]: I0131 15:08:13.195330 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" event={"ID":"30bcffc2-0054-475e-af66-74b73ec95edb","Type":"ContainerStarted","Data":"9f7efbda8e0a57f609653c2420d19b62db36d50793cbce07101d503a155356a3"} Jan 31 15:08:17 crc kubenswrapper[4763]: I0131 15:08:17.224379 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" event={"ID":"30bcffc2-0054-475e-af66-74b73ec95edb","Type":"ContainerStarted","Data":"f17088523d5fc7e22b3cc161ca398fe9a451ef81ebcb110275568dee2b3c5dbd"} Jan 31 15:08:17 crc kubenswrapper[4763]: I0131 15:08:17.224838 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:17 crc kubenswrapper[4763]: I0131 15:08:17.247727 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" podStartSLOduration=1.4888131279999999 podStartE2EDuration="5.247707347s" podCreationTimestamp="2026-01-31 15:08:12 +0000 UTC" firstStartedPulling="2026-01-31 15:08:13.089744769 +0000 UTC m=+812.844483062" lastFinishedPulling="2026-01-31 15:08:16.848638968 +0000 UTC m=+816.603377281" observedRunningTime="2026-01-31 15:08:17.243315691 +0000 UTC m=+816.998053984" watchObservedRunningTime="2026-01-31 15:08:17.247707347 +0000 UTC m=+817.002445650" Jan 31 15:08:22 crc kubenswrapper[4763]: I0131 15:08:22.909127 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-68956c85f5-mrnqc" Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.935512 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-9vcjd"] Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.937730 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.942030 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9vcjd"] Jan 31 15:08:25 crc kubenswrapper[4763]: I0131 15:08:25.942152 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-gb9xs" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.062785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxsng\" (UniqueName: \"kubernetes.io/projected/df73235a-c7ce-449c-b163-341974166624-kube-api-access-mxsng\") pod \"infra-operator-index-9vcjd\" (UID: \"df73235a-c7ce-449c-b163-341974166624\") " pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.164354 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxsng\" (UniqueName: \"kubernetes.io/projected/df73235a-c7ce-449c-b163-341974166624-kube-api-access-mxsng\") pod \"infra-operator-index-9vcjd\" (UID: \"df73235a-c7ce-449c-b163-341974166624\") " pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.200265 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxsng\" (UniqueName: \"kubernetes.io/projected/df73235a-c7ce-449c-b163-341974166624-kube-api-access-mxsng\") pod \"infra-operator-index-9vcjd\" (UID: \"df73235a-c7ce-449c-b163-341974166624\") " pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.254383 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:26 crc kubenswrapper[4763]: I0131 15:08:26.712753 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9vcjd"] Jan 31 15:08:26 crc kubenswrapper[4763]: W0131 15:08:26.717301 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf73235a_c7ce_449c_b163_341974166624.slice/crio-2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165 WatchSource:0}: Error finding container 2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165: Status 404 returned error can't find the container with id 2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165 Jan 31 15:08:27 crc kubenswrapper[4763]: I0131 15:08:27.289428 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9vcjd" event={"ID":"df73235a-c7ce-449c-b163-341974166624","Type":"ContainerStarted","Data":"2dd2e004f215046371ac43351b1bc973eb8b0334a1fb4966c1beea11626d7165"} Jan 31 15:08:28 crc kubenswrapper[4763]: I0131 15:08:28.298366 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9vcjd" event={"ID":"df73235a-c7ce-449c-b163-341974166624","Type":"ContainerStarted","Data":"ee320b2429f4044596e0f420ac4cfc8e847433b8e04df275225c7c5c65b706de"} Jan 31 15:08:28 crc kubenswrapper[4763]: I0131 15:08:28.328370 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-9vcjd" podStartSLOduration=2.470410388 podStartE2EDuration="3.328336055s" podCreationTimestamp="2026-01-31 15:08:25 +0000 UTC" firstStartedPulling="2026-01-31 15:08:26.721427142 +0000 UTC m=+826.476165465" lastFinishedPulling="2026-01-31 15:08:27.579352829 +0000 UTC m=+827.334091132" observedRunningTime="2026-01-31 15:08:28.320089929 +0000 UTC m=+828.074828262" watchObservedRunningTime="2026-01-31 15:08:28.328336055 +0000 UTC m=+828.083074388" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.255550 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.256413 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.301991 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:36 crc kubenswrapper[4763]: I0131 15:08:36.400410 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-9vcjd" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.195112 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb"] Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.197194 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.199936 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.210176 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb"] Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.336127 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.336414 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.336783 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.438068 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.438179 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.438222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.439145 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.439485 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.472800 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.523689 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:45 crc kubenswrapper[4763]: I0131 15:08:45.784423 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb"] Jan 31 15:08:45 crc kubenswrapper[4763]: W0131 15:08:45.786281 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3636515d_8655_48d7_b0f6_54e4c6635f1c.slice/crio-e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072 WatchSource:0}: Error finding container e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072: Status 404 returned error can't find the container with id e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072 Jan 31 15:08:46 crc kubenswrapper[4763]: I0131 15:08:46.437634 4763 generic.go:334] "Generic (PLEG): container finished" podID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerID="ed4e2126766da63c0e7e50df98b96eda794a2808f23cd3ea8177f76b4ede5721" exitCode=0 Jan 31 15:08:46 crc kubenswrapper[4763]: I0131 15:08:46.437677 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"ed4e2126766da63c0e7e50df98b96eda794a2808f23cd3ea8177f76b4ede5721"} Jan 31 15:08:46 crc kubenswrapper[4763]: I0131 15:08:46.437757 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerStarted","Data":"e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072"} Jan 31 15:08:48 crc kubenswrapper[4763]: I0131 15:08:48.486012 4763 generic.go:334] "Generic (PLEG): container finished" podID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerID="fef703f3e9560fc57c77a16edc2c33c259c56a68ca6dcbdf7cb14f969e6de2c7" exitCode=0 Jan 31 15:08:48 crc kubenswrapper[4763]: I0131 15:08:48.486384 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"fef703f3e9560fc57c77a16edc2c33c259c56a68ca6dcbdf7cb14f969e6de2c7"} Jan 31 15:08:49 crc kubenswrapper[4763]: I0131 15:08:49.496768 4763 generic.go:334] "Generic (PLEG): container finished" podID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerID="4a7ea84e03b7c6a30104c397d809d487a129ecdccd4b68c9144a56a44b18655c" exitCode=0 Jan 31 15:08:49 crc kubenswrapper[4763]: I0131 15:08:49.496863 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"4a7ea84e03b7c6a30104c397d809d487a129ecdccd4b68c9144a56a44b18655c"} Jan 31 15:08:50 crc kubenswrapper[4763]: I0131 15:08:50.858132 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.030964 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") pod \"3636515d-8655-48d7-b0f6-54e4c6635f1c\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.031112 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") pod \"3636515d-8655-48d7-b0f6-54e4c6635f1c\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.031152 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") pod \"3636515d-8655-48d7-b0f6-54e4c6635f1c\" (UID: \"3636515d-8655-48d7-b0f6-54e4c6635f1c\") " Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.036145 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle" (OuterVolumeSpecName: "bundle") pod "3636515d-8655-48d7-b0f6-54e4c6635f1c" (UID: "3636515d-8655-48d7-b0f6-54e4c6635f1c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.049802 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd" (OuterVolumeSpecName: "kube-api-access-nvxcd") pod "3636515d-8655-48d7-b0f6-54e4c6635f1c" (UID: "3636515d-8655-48d7-b0f6-54e4c6635f1c"). InnerVolumeSpecName "kube-api-access-nvxcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.059753 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util" (OuterVolumeSpecName: "util") pod "3636515d-8655-48d7-b0f6-54e4c6635f1c" (UID: "3636515d-8655-48d7-b0f6-54e4c6635f1c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.132809 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.132856 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3636515d-8655-48d7-b0f6-54e4c6635f1c-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.132876 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvxcd\" (UniqueName: \"kubernetes.io/projected/3636515d-8655-48d7-b0f6-54e4c6635f1c-kube-api-access-nvxcd\") on node \"crc\" DevicePath \"\"" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.521474 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" event={"ID":"3636515d-8655-48d7-b0f6-54e4c6635f1c","Type":"ContainerDied","Data":"e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072"} Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.521532 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8dac8b58b5f7a60b8a575b9164ec12af0cabffd3ab35eb88d623b326f00a072" Jan 31 15:08:51 crc kubenswrapper[4763]: I0131 15:08:51.521671 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.743971 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:08:54 crc kubenswrapper[4763]: E0131 15:08:54.744542 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="util" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744552 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="util" Jan 31 15:08:54 crc kubenswrapper[4763]: E0131 15:08:54.744564 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="extract" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744571 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="extract" Jan 31 15:08:54 crc kubenswrapper[4763]: E0131 15:08:54.744584 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="pull" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744590 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="pull" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.744705 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="3636515d-8655-48d7-b0f6-54e4c6635f1c" containerName="extract" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.745414 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.802247 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.881255 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.881293 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.881373 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983152 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983222 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983329 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983750 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:54 crc kubenswrapper[4763]: I0131 15:08:54.983825 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.017203 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"certified-operators-dl6p4\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.098204 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.338512 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.343487 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.358811 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.358941 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.489357 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.489453 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.489489 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.559303 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerStarted","Data":"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce"} Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.559351 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerStarted","Data":"1ce5c6e1711277fcfbaf495466b9e5fe5c110ae0edd21c55bf7a8fa1d794558e"} Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.590623 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 
15:08:55.591033 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.591073 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.591179 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.591459 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.610491 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"community-operators-xx6l2\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:55 crc kubenswrapper[4763]: I0131 15:08:55.697792 4763 util.go:30] "No sandbox for pod can be found. 
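
Each mount above follows the same reconciler rhythm: VerifyControllerAttachedVolume registers the volume, MountVolume starts, and MountVolume.SetUp succeeded confirms it. The reconciler is driven by diffing a desired state of world against the actual state; a compressed sketch of one such pass, with invented names (the kubelet's volume manager is far richer):

```go
package main

import "fmt"

// Sketch of a desired-state vs. actual-state volume reconcile pass.
type volume struct{ name, pod string }

func reconcile(desired []volume, mounted map[string]bool,
	setUp func(volume) error) {
	for _, v := range desired {
		if mounted[v.name] {
			continue // already in the actual state of world
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		if err := setUp(v); err != nil {
			fmt.Println("SetUp failed:", err)
			continue
		}
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"utilities", "certified-operators-dl6p4"},
		{"catalog-content", "certified-operators-dl6p4"},
		{"kube-api-access-rt7xf", "certified-operators-dl6p4"},
	}
	reconcile(desired, map[string]bool{}, func(volume) error { return nil })
}
```
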
Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.149598 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:08:56 crc kubenswrapper[4763]: W0131 15:08:56.158218 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1102b46b_1431_4abc_acf3_fc15238c9dec.slice/crio-df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296 WatchSource:0}: Error finding container df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296: Status 404 returned error can't find the container with id df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296 Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.566190 4763 generic.go:334] "Generic (PLEG): container finished" podID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerID="58e640168ef1b75e853e394649bc966d1036d4bb11ab8918c809f9ee7dee4196" exitCode=0 Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.566264 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"58e640168ef1b75e853e394649bc966d1036d4bb11ab8918c809f9ee7dee4196"} Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.566580 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerStarted","Data":"df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296"} Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.570022 4763 generic.go:334] "Generic (PLEG): container finished" podID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" exitCode=0 Jan 31 15:08:56 crc kubenswrapper[4763]: I0131 15:08:56.570071 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce"} Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.576783 4763 generic.go:334] "Generic (PLEG): container finished" podID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" exitCode=0 Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.576935 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26"} Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.579774 4763 generic.go:334] "Generic (PLEG): container finished" podID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerID="0c5a179d917112c47df3d672325ac30e6e4efd61885f9377b2ea3e10d6c629b4" exitCode=0 Jan 31 15:08:57 crc kubenswrapper[4763]: I0131 15:08:57.579816 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"0c5a179d917112c47df3d672325ac30e6e4efd61885f9377b2ea3e10d6c629b4"} Jan 31 15:08:58 crc kubenswrapper[4763]: I0131 15:08:58.587014 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerStarted","Data":"23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157"} Jan 31 15:08:58 crc kubenswrapper[4763]: I0131 15:08:58.589399 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerStarted","Data":"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b"} Jan 31 15:08:58 crc kubenswrapper[4763]: I0131 15:08:58.602999 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xx6l2" podStartSLOduration=2.224622835 podStartE2EDuration="3.602984067s" podCreationTimestamp="2026-01-31 15:08:55 +0000 UTC" firstStartedPulling="2026-01-31 15:08:56.56786793 +0000 UTC m=+856.322606233" lastFinishedPulling="2026-01-31 15:08:57.946229172 +0000 UTC m=+857.700967465" observedRunningTime="2026-01-31 15:08:58.602152925 +0000 UTC m=+858.356891218" watchObservedRunningTime="2026-01-31 15:08:58.602984067 +0000 UTC m=+858.357722360" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.233892 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dl6p4" podStartSLOduration=6.80510965 podStartE2EDuration="9.233877542s" podCreationTimestamp="2026-01-31 15:08:54 +0000 UTC" firstStartedPulling="2026-01-31 15:08:55.560674454 +0000 UTC m=+855.315412747" lastFinishedPulling="2026-01-31 15:08:57.989442346 +0000 UTC m=+857.744180639" observedRunningTime="2026-01-31 15:08:58.623519606 +0000 UTC m=+858.378257899" watchObservedRunningTime="2026-01-31 15:09:03.233877542 +0000 UTC m=+862.988615835" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.234736 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5"] Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.235392 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.237273 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.237438 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pk6lc" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.258867 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5"] Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.419268 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmdvh\" (UniqueName: \"kubernetes.io/projected/ff757490-bd0f-4140-9f70-e5ec9d26353f-kube-api-access-pmdvh\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.419349 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-webhook-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.419499 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-apiservice-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.520673 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-apiservice-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.520758 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmdvh\" (UniqueName: \"kubernetes.io/projected/ff757490-bd0f-4140-9f70-e5ec9d26353f-kube-api-access-pmdvh\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.520792 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-webhook-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.526256 4763 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-webhook-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.529050 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ff757490-bd0f-4140-9f70-e5ec9d26353f-apiservice-cert\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.540611 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmdvh\" (UniqueName: \"kubernetes.io/projected/ff757490-bd0f-4140-9f70-e5ec9d26353f-kube-api-access-pmdvh\") pod \"infra-operator-controller-manager-5b76796566-wfzb5\" (UID: \"ff757490-bd0f-4140-9f70-e5ec9d26353f\") " pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.562378 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:03 crc kubenswrapper[4763]: I0131 15:09:03.993208 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5"] Jan 31 15:09:04 crc kubenswrapper[4763]: I0131 15:09:04.639732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" event={"ID":"ff757490-bd0f-4140-9f70-e5ec9d26353f","Type":"ContainerStarted","Data":"ccb30435700f857e3fcda34fc789beb38c3fe560d9f9d1985ed5f669ef555514"} Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.099126 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.100504 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.148181 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.698138 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.698197 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.706850 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:05 crc kubenswrapper[4763]: I0131 15:09:05.749747 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:06 crc kubenswrapper[4763]: I0131 15:09:06.656275 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" 
event={"ID":"ff757490-bd0f-4140-9f70-e5ec9d26353f","Type":"ContainerStarted","Data":"59374f00e9c06934e324014a2f2bd5c0e56a99a03ab8b33a467bbd32270380ad"} Jan 31 15:09:06 crc kubenswrapper[4763]: I0131 15:09:06.682066 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" podStartSLOduration=1.588242655 podStartE2EDuration="3.682048383s" podCreationTimestamp="2026-01-31 15:09:03 +0000 UTC" firstStartedPulling="2026-01-31 15:09:04.023012603 +0000 UTC m=+863.777750936" lastFinishedPulling="2026-01-31 15:09:06.116818381 +0000 UTC m=+865.871556664" observedRunningTime="2026-01-31 15:09:06.677991666 +0000 UTC m=+866.432729999" watchObservedRunningTime="2026-01-31 15:09:06.682048383 +0000 UTC m=+866.436786676" Jan 31 15:09:06 crc kubenswrapper[4763]: I0131 15:09:06.712937 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:07 crc kubenswrapper[4763]: I0131 15:09:07.663918 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.821000 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.822565 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.826047 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.826672 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.827900 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-2b4kn" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.828102 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.830878 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.842395 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.843590 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.850120 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.852642 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.860296 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.863202 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.869750 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999334 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-kolla-config\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999394 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-operator-scripts\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999432 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999473 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kolla-config\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999509 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999569 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-generated\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999603 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mfz\" (UniqueName: \"kubernetes.io/projected/e5a89037-391b-4806-8f01-09ddd6a4d13e-kube-api-access-w8mfz\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999635 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999678 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxdxd\" (UniqueName: \"kubernetes.io/projected/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kube-api-access-zxdxd\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999741 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:08 crc kubenswrapper[4763]: I0131 15:09:08.999771 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:08.999806 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:08.999879 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000024 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwmv\" (UniqueName: \"kubernetes.io/projected/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kube-api-access-cpwmv\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000113 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-default\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000207 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000246 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-default\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.000298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101337 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxdxd\" (UniqueName: \"kubernetes.io/projected/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kube-api-access-zxdxd\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101379 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101396 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101412 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101433 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101463 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwmv\" (UniqueName: \"kubernetes.io/projected/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kube-api-access-cpwmv\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101482 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-default\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101537 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-default\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101560 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101584 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-kolla-config\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101601 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-operator-scripts\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101622 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101645 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kolla-config\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101662 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101679 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-generated\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101713 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mfz\" (UniqueName: \"kubernetes.io/projected/e5a89037-391b-4806-8f01-09ddd6a4d13e-kube-api-access-w8mfz\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " 
pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.101764 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.102248 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.102550 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103060 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-generated\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103257 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") device mount path \"/mnt/openstack/pv07\"" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103272 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") device mount path \"/mnt/openstack/pv12\"" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103374 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kolla-config\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103494 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103771 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-kolla-config\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.103988 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-config-data-default\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104715 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-config-data-default\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dc474c59-7d29-4ce0-86c8-07d96c462b4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.104983 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.105601 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a89037-391b-4806-8f01-09ddd6a4d13e-operator-scripts\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.124558 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.126790 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.128876 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " 
pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.129684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxdxd\" (UniqueName: \"kubernetes.io/projected/dc474c59-7d29-4ce0-86c8-07d96c462b4e-kube-api-access-zxdxd\") pod \"openstack-galera-0\" (UID: \"dc474c59-7d29-4ce0-86c8-07d96c462b4e\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.134584 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwmv\" (UniqueName: \"kubernetes.io/projected/cd0d5ccb-1d59-428e-9a53-17427cd0e5dc-kube-api-access-cpwmv\") pod \"openstack-galera-2\" (UID: \"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.141844 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mfz\" (UniqueName: \"kubernetes.io/projected/e5a89037-391b-4806-8f01-09ddd6a4d13e-kube-api-access-w8mfz\") pod \"openstack-galera-1\" (UID: \"e5a89037-391b-4806-8f01-09ddd6a4d13e\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.160182 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.177975 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.186451 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.471605 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.674923 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerStarted","Data":"e7a62c9e853bb819796306cfa81c4490baddece71bc86737097c21ff5c15cb05"} Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.739267 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.758116 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 31 15:09:09 crc kubenswrapper[4763]: W0131 15:09:09.764982 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc474c59_7d29_4ce0_86c8_07d96c462b4e.slice/crio-e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e WatchSource:0}: Error finding container e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e: Status 404 returned error can't find the container with id e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.927979 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:09:09 crc kubenswrapper[4763]: I0131 15:09:09.928350 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dl6p4" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" 
containerID="cri-o://3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" gracePeriod=2 Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.346495 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.420199 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") pod \"73b1db31-195c-41e8-9ab4-6e13e96600fa\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.420244 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") pod \"73b1db31-195c-41e8-9ab4-6e13e96600fa\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.420302 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") pod \"73b1db31-195c-41e8-9ab4-6e13e96600fa\" (UID: \"73b1db31-195c-41e8-9ab4-6e13e96600fa\") " Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.421586 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities" (OuterVolumeSpecName: "utilities") pod "73b1db31-195c-41e8-9ab4-6e13e96600fa" (UID: "73b1db31-195c-41e8-9ab4-6e13e96600fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.427182 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf" (OuterVolumeSpecName: "kube-api-access-rt7xf") pod "73b1db31-195c-41e8-9ab4-6e13e96600fa" (UID: "73b1db31-195c-41e8-9ab4-6e13e96600fa"). InnerVolumeSpecName "kube-api-access-rt7xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.482352 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b1db31-195c-41e8-9ab4-6e13e96600fa" (UID: "73b1db31-195c-41e8-9ab4-6e13e96600fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.522021 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.522060 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt7xf\" (UniqueName: \"kubernetes.io/projected/73b1db31-195c-41e8-9ab4-6e13e96600fa-kube-api-access-rt7xf\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.522072 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b1db31-195c-41e8-9ab4-6e13e96600fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.692393 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerStarted","Data":"0283241ede766938c9f3cc5927a25fdc80110beb7aff3558b75f482647214a8f"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.696282 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerStarted","Data":"e7d2c892bd28ee789e5a46a9e46a211734b0849f3ef86c66c65eca95f7d8991e"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698497 4763 generic.go:334] "Generic (PLEG): container finished" podID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" exitCode=0 Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698532 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698555 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dl6p4" event={"ID":"73b1db31-195c-41e8-9ab4-6e13e96600fa","Type":"ContainerDied","Data":"1ce5c6e1711277fcfbaf495466b9e5fe5c110ae0edd21c55bf7a8fa1d794558e"} Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698574 4763 scope.go:117] "RemoveContainer" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.698695 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dl6p4" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.728439 4763 scope.go:117] "RemoveContainer" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.745547 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.753606 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dl6p4"] Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.761202 4763 scope.go:117] "RemoveContainer" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.777915 4763 scope.go:117] "RemoveContainer" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" Jan 31 15:09:10 crc kubenswrapper[4763]: E0131 15:09:10.778428 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b\": container with ID starting with 3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b not found: ID does not exist" containerID="3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778463 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b"} err="failed to get container status \"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b\": rpc error: code = NotFound desc = could not find container \"3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b\": container with ID starting with 3e1577b896f75ca932834cf67f6d200f963de8f29543fd5d0da3c00a8743117b not found: ID does not exist" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778492 4763 scope.go:117] "RemoveContainer" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" Jan 31 15:09:10 crc kubenswrapper[4763]: E0131 15:09:10.778836 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26\": container with ID starting with 34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26 not found: ID does not exist" containerID="34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778878 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26"} err="failed to get container status \"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26\": rpc error: code = NotFound desc = could not find container \"34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26\": container with ID starting with 34fbcf0f68f2b4bcc29cd6b0ec98665d1cb2deea5c4fd3a798efa2e8b779ef26 not found: ID does not exist" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.778903 4763 scope.go:117] "RemoveContainer" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" Jan 31 15:09:10 crc kubenswrapper[4763]: E0131 15:09:10.779385 4763 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce\": container with ID starting with 4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce not found: ID does not exist" containerID="4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce" Jan 31 15:09:10 crc kubenswrapper[4763]: I0131 15:09:10.779421 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce"} err="failed to get container status \"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce\": rpc error: code = NotFound desc = could not find container \"4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce\": container with ID starting with 4151039b4f49a7ff99cc6b11471e5efef757127e04cb78f85efb3f28ef0adfce not found: ID does not exist" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.064399 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" path="/var/lib/kubelet/pods/73b1db31-195c-41e8-9ab4-6e13e96600fa/volumes" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.325736 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.325994 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xx6l2" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" containerID="cri-o://23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157" gracePeriod=2 Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710541 4763 generic.go:334] "Generic (PLEG): container finished" podID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerID="23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157" exitCode=0 Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710625 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157"} Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710871 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xx6l2" event={"ID":"1102b46b-1431-4abc-acf3-fc15238c9dec","Type":"ContainerDied","Data":"df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296"} Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.710892 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df2ae68589cdf5f2daa2674b8c6d3f469b0de474439fd474c5f8daf2c17ae296" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.722153 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.849513 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") pod \"1102b46b-1431-4abc-acf3-fc15238c9dec\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.849622 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") pod \"1102b46b-1431-4abc-acf3-fc15238c9dec\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.849664 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") pod \"1102b46b-1431-4abc-acf3-fc15238c9dec\" (UID: \"1102b46b-1431-4abc-acf3-fc15238c9dec\") " Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.851590 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities" (OuterVolumeSpecName: "utilities") pod "1102b46b-1431-4abc-acf3-fc15238c9dec" (UID: "1102b46b-1431-4abc-acf3-fc15238c9dec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.861656 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td" (OuterVolumeSpecName: "kube-api-access-fl6td") pod "1102b46b-1431-4abc-acf3-fc15238c9dec" (UID: "1102b46b-1431-4abc-acf3-fc15238c9dec"). InnerVolumeSpecName "kube-api-access-fl6td". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.895818 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1102b46b-1431-4abc-acf3-fc15238c9dec" (UID: "1102b46b-1431-4abc-acf3-fc15238c9dec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.950750 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl6td\" (UniqueName: \"kubernetes.io/projected/1102b46b-1431-4abc-acf3-fc15238c9dec-kube-api-access-fl6td\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.950790 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:11 crc kubenswrapper[4763]: I0131 15:09:11.950804 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1102b46b-1431-4abc-acf3-fc15238c9dec-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:12 crc kubenswrapper[4763]: I0131 15:09:12.717165 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xx6l2" Jan 31 15:09:12 crc kubenswrapper[4763]: I0131 15:09:12.749231 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:09:12 crc kubenswrapper[4763]: I0131 15:09:12.752873 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xx6l2"] Jan 31 15:09:13 crc kubenswrapper[4763]: I0131 15:09:13.047891 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" path="/var/lib/kubelet/pods/1102b46b-1431-4abc-acf3-fc15238c9dec/volumes" Jan 31 15:09:13 crc kubenswrapper[4763]: I0131 15:09:13.568441 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5b76796566-wfzb5" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.338581 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-l9x4g"] Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339249 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339355 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339379 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339388 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339398 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339406 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339417 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339425 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339439 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339447 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="extract-content" Jan 31 15:09:19 crc kubenswrapper[4763]: E0131 15:09:19.339458 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339466 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="extract-utilities" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339570 4763 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="73b1db31-195c-41e8-9ab4-6e13e96600fa" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339580 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="1102b46b-1431-4abc-acf3-fc15238c9dec" containerName="registry-server" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.339991 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.342268 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-l7lb8" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.350667 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-l9x4g"] Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.455610 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvww4\" (UniqueName: \"kubernetes.io/projected/6fa47f40-fce4-4e57-aebb-3313c4c996dd-kube-api-access-vvww4\") pod \"rabbitmq-cluster-operator-index-l9x4g\" (UID: \"6fa47f40-fce4-4e57-aebb-3313c4c996dd\") " pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.557390 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvww4\" (UniqueName: \"kubernetes.io/projected/6fa47f40-fce4-4e57-aebb-3313c4c996dd-kube-api-access-vvww4\") pod \"rabbitmq-cluster-operator-index-l9x4g\" (UID: \"6fa47f40-fce4-4e57-aebb-3313c4c996dd\") " pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.577576 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvww4\" (UniqueName: \"kubernetes.io/projected/6fa47f40-fce4-4e57-aebb-3313c4c996dd-kube-api-access-vvww4\") pod \"rabbitmq-cluster-operator-index-l9x4g\" (UID: \"6fa47f40-fce4-4e57-aebb-3313c4c996dd\") " pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:19 crc kubenswrapper[4763]: I0131 15:09:19.708355 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:21 crc kubenswrapper[4763]: I0131 15:09:21.756592 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-l9x4g"] Jan 31 15:09:21 crc kubenswrapper[4763]: W0131 15:09:21.764414 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fa47f40_fce4_4e57_aebb_3313c4c996dd.slice/crio-b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636 WatchSource:0}: Error finding container b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636: Status 404 returned error can't find the container with id b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636 Jan 31 15:09:21 crc kubenswrapper[4763]: I0131 15:09:21.783653 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" event={"ID":"6fa47f40-fce4-4e57-aebb-3313c4c996dd","Type":"ContainerStarted","Data":"b5acb9dd6631da5850f2b0a900cab783ee79ccaaad65b3866952f7c3e37d3636"} Jan 31 15:09:22 crc kubenswrapper[4763]: I0131 15:09:22.789892 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerStarted","Data":"b85c3b69f3888f724d07fc1d22586bd6d61c9457a894e1c50be4e3612cb4f38b"} Jan 31 15:09:22 crc kubenswrapper[4763]: I0131 15:09:22.792485 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerStarted","Data":"ce02934a39037592e12671864428578327605869fa422a0b14f2952f37b4fe7f"} Jan 31 15:09:22 crc kubenswrapper[4763]: I0131 15:09:22.794134 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerStarted","Data":"3b7aa4bbd87ecb3b6f56a81f7b6a6eb39d741c6c8714443fd00fef83557cbf4e"} Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.750947 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.751982 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.754019 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-ltl5x" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.755065 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.774751 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.836309 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47f5\" (UniqueName: \"kubernetes.io/projected/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kube-api-access-m47f5\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.836367 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-config-data\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.836390 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kolla-config\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938013 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47f5\" (UniqueName: \"kubernetes.io/projected/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kube-api-access-m47f5\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938059 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-config-data\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938078 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kolla-config\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938847 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kolla-config\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc kubenswrapper[4763]: I0131 15:09:24.938918 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-config-data\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:24 crc 
kubenswrapper[4763]: I0131 15:09:24.958557 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47f5\" (UniqueName: \"kubernetes.io/projected/ecb69fa0-2df1-477e-a257-05e0f1dd1c76-kube-api-access-m47f5\") pod \"memcached-0\" (UID: \"ecb69fa0-2df1-477e-a257-05e0f1dd1c76\") " pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.111860 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.817152 4763 generic.go:334] "Generic (PLEG): container finished" podID="dc474c59-7d29-4ce0-86c8-07d96c462b4e" containerID="ce02934a39037592e12671864428578327605869fa422a0b14f2952f37b4fe7f" exitCode=0 Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.817324 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerDied","Data":"ce02934a39037592e12671864428578327605869fa422a0b14f2952f37b4fe7f"} Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.820321 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd0d5ccb-1d59-428e-9a53-17427cd0e5dc" containerID="3b7aa4bbd87ecb3b6f56a81f7b6a6eb39d741c6c8714443fd00fef83557cbf4e" exitCode=0 Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.820384 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerDied","Data":"3b7aa4bbd87ecb3b6f56a81f7b6a6eb39d741c6c8714443fd00fef83557cbf4e"} Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.828708 4763 generic.go:334] "Generic (PLEG): container finished" podID="e5a89037-391b-4806-8f01-09ddd6a4d13e" containerID="b85c3b69f3888f724d07fc1d22586bd6d61c9457a894e1c50be4e3612cb4f38b" exitCode=0 Jan 31 15:09:25 crc kubenswrapper[4763]: I0131 15:09:25.828764 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerDied","Data":"b85c3b69f3888f724d07fc1d22586bd6d61c9457a894e1c50be4e3612cb4f38b"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.248506 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.840822 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" event={"ID":"6fa47f40-fce4-4e57-aebb-3313c4c996dd","Type":"ContainerStarted","Data":"5a7d5b09f32d7460f5ea0260edf43a51f47ca11e4d1a7f943b18f279f6393781"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.842841 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"ecb69fa0-2df1-477e-a257-05e0f1dd1c76","Type":"ContainerStarted","Data":"a23813ee62437cd5e084ebd847143fc9fbe2b5e309aa2320771ea58c5eb7ba7c"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.844409 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"e5a89037-391b-4806-8f01-09ddd6a4d13e","Type":"ContainerStarted","Data":"e1c33ef385ad44dbf4cd4621deeafe477af76df1161ec9efe735c666b6660d89"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.846663 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" 
event={"ID":"dc474c59-7d29-4ce0-86c8-07d96c462b4e","Type":"ContainerStarted","Data":"e1e4b38c6e517d684a2ae323e5d2ad612b2b3909928f443ab242ac2744b4be7d"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.848922 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cd0d5ccb-1d59-428e-9a53-17427cd0e5dc","Type":"ContainerStarted","Data":"16c7b86549cc8bf6c8b6913433ebe7c9e1038b19015e1aad91f8610a8ce92baf"} Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.858824 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" podStartSLOduration=3.8290526849999997 podStartE2EDuration="7.858804486s" podCreationTimestamp="2026-01-31 15:09:19 +0000 UTC" firstStartedPulling="2026-01-31 15:09:21.774907265 +0000 UTC m=+881.529645558" lastFinishedPulling="2026-01-31 15:09:25.804659066 +0000 UTC m=+885.559397359" observedRunningTime="2026-01-31 15:09:26.854649967 +0000 UTC m=+886.609403210" watchObservedRunningTime="2026-01-31 15:09:26.858804486 +0000 UTC m=+886.613542779" Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.873674 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=7.666174376 podStartE2EDuration="19.873657696s" podCreationTimestamp="2026-01-31 15:09:07 +0000 UTC" firstStartedPulling="2026-01-31 15:09:09.481947571 +0000 UTC m=+869.236685874" lastFinishedPulling="2026-01-31 15:09:21.689430911 +0000 UTC m=+881.444169194" observedRunningTime="2026-01-31 15:09:26.869523118 +0000 UTC m=+886.624261411" watchObservedRunningTime="2026-01-31 15:09:26.873657696 +0000 UTC m=+886.628395989" Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.891820 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=7.913623523 podStartE2EDuration="19.891805693s" podCreationTimestamp="2026-01-31 15:09:07 +0000 UTC" firstStartedPulling="2026-01-31 15:09:09.767273312 +0000 UTC m=+869.522011605" lastFinishedPulling="2026-01-31 15:09:21.745455482 +0000 UTC m=+881.500193775" observedRunningTime="2026-01-31 15:09:26.888904227 +0000 UTC m=+886.643642520" watchObservedRunningTime="2026-01-31 15:09:26.891805693 +0000 UTC m=+886.646543986" Jan 31 15:09:26 crc kubenswrapper[4763]: I0131 15:09:26.914547 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=7.95658511 podStartE2EDuration="19.914532099s" podCreationTimestamp="2026-01-31 15:09:07 +0000 UTC" firstStartedPulling="2026-01-31 15:09:09.746317972 +0000 UTC m=+869.501056275" lastFinishedPulling="2026-01-31 15:09:21.704264951 +0000 UTC m=+881.459003264" observedRunningTime="2026-01-31 15:09:26.91263119 +0000 UTC m=+886.667369483" watchObservedRunningTime="2026-01-31 15:09:26.914532099 +0000 UTC m=+886.669270382" Jan 31 15:09:28 crc kubenswrapper[4763]: I0131 15:09:28.863147 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"ecb69fa0-2df1-477e-a257-05e0f1dd1c76","Type":"ContainerStarted","Data":"49500059f695e13e5c01f24153da260ca6dd91529719115239e3e0e86165f34a"} Jan 31 15:09:28 crc kubenswrapper[4763]: I0131 15:09:28.863434 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:28 crc kubenswrapper[4763]: I0131 15:09:28.885094 4763 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=2.738730083 podStartE2EDuration="4.885076671s" podCreationTimestamp="2026-01-31 15:09:24 +0000 UTC" firstStartedPulling="2026-01-31 15:09:26.257240321 +0000 UTC m=+886.011978614" lastFinishedPulling="2026-01-31 15:09:28.403586909 +0000 UTC m=+888.158325202" observedRunningTime="2026-01-31 15:09:28.882911194 +0000 UTC m=+888.637649497" watchObservedRunningTime="2026-01-31 15:09:28.885076671 +0000 UTC m=+888.639814964" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.161446 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.161524 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.180909 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.181834 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.186783 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.186820 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.709253 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.709368 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:29 crc kubenswrapper[4763]: I0131 15:09:29.754031 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:30 crc kubenswrapper[4763]: I0131 15:09:30.911816 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-l9x4g" Jan 31 15:09:35 crc kubenswrapper[4763]: I0131 15:09:35.113056 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Jan 31 15:09:35 crc kubenswrapper[4763]: I0131 15:09:35.409202 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:35 crc kubenswrapper[4763]: I0131 15:09:35.472732 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.828603 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.830391 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.833374 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.844044 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.984177 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:37 crc kubenswrapper[4763]: I0131 15:09:37.984320 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.085821 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.085881 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.086843 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.105867 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"root-account-create-update-jh8vr\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.151934 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.600529 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:09:38 crc kubenswrapper[4763]: W0131 15:09:38.606601 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod196347d8_7892_4b32_8bc2_0127439a95f0.slice/crio-97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b WatchSource:0}: Error finding container 97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b: Status 404 returned error can't find the container with id 97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.926398 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerStarted","Data":"84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987"} Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.927500 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerStarted","Data":"97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b"} Jan 31 15:09:38 crc kubenswrapper[4763]: I0131 15:09:38.940429 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/root-account-create-update-jh8vr" podStartSLOduration=1.940399558 podStartE2EDuration="1.940399558s" podCreationTimestamp="2026-01-31 15:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:09:38.937265696 +0000 UTC m=+898.692003989" watchObservedRunningTime="2026-01-31 15:09:38.940399558 +0000 UTC m=+898.695137911" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.182687 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr"] Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.184923 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.188321 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.222954 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr"] Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.229515 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.229650 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.229687 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.331858 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332026 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332071 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332370 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.332861 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.362937 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.520617 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:39 crc kubenswrapper[4763]: I0131 15:09:39.935949 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr"] Jan 31 15:09:39 crc kubenswrapper[4763]: W0131 15:09:39.952040 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6835994_86f5_4950_b010_780530fceffe.slice/crio-e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96 WatchSource:0}: Error finding container e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96: Status 404 returned error can't find the container with id e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96 Jan 31 15:09:40 crc kubenswrapper[4763]: I0131 15:09:40.942115 4763 generic.go:334] "Generic (PLEG): container finished" podID="b6835994-86f5-4950-b010-780530fceffe" containerID="b2086f35b41938e43e63cfec8e02d1afade16cf2d587d3c3dd70dc3e888f4512" exitCode=0 Jan 31 15:09:40 crc kubenswrapper[4763]: I0131 15:09:40.942224 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"b2086f35b41938e43e63cfec8e02d1afade16cf2d587d3c3dd70dc3e888f4512"} Jan 31 15:09:40 crc kubenswrapper[4763]: I0131 15:09:40.942254 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerStarted","Data":"e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96"} Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.948107 4763 generic.go:334] "Generic (PLEG): container finished" podID="b6835994-86f5-4950-b010-780530fceffe" containerID="f6ff0f7e969478dc2260b302e7b19458fa0b15da6240959827dafc4920332836" exitCode=0 Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.948207 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"f6ff0f7e969478dc2260b302e7b19458fa0b15da6240959827dafc4920332836"} Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.958770 4763 generic.go:334] "Generic (PLEG): container finished" podID="196347d8-7892-4b32-8bc2-0127439a95f0" containerID="84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987" exitCode=0 Jan 31 15:09:41 crc kubenswrapper[4763]: I0131 15:09:41.958831 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerDied","Data":"84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987"} Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.177768 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.178409 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.643203 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.738394 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:09:44 crc kubenswrapper[4763]: E0131 15:09:44.738630 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" containerName="mariadb-account-create-update" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.738641 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" containerName="mariadb-account-create-update" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.738790 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" containerName="mariadb-account-create-update" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.739714 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.749160 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.807135 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") pod \"196347d8-7892-4b32-8bc2-0127439a95f0\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.807259 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") pod \"196347d8-7892-4b32-8bc2-0127439a95f0\" (UID: \"196347d8-7892-4b32-8bc2-0127439a95f0\") " Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.808005 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "196347d8-7892-4b32-8bc2-0127439a95f0" (UID: "196347d8-7892-4b32-8bc2-0127439a95f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.813936 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8" (OuterVolumeSpecName: "kube-api-access-8kwc8") pod "196347d8-7892-4b32-8bc2-0127439a95f0" (UID: "196347d8-7892-4b32-8bc2-0127439a95f0"). InnerVolumeSpecName "kube-api-access-8kwc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909054 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909127 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909169 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909236 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kwc8\" (UniqueName: \"kubernetes.io/projected/196347d8-7892-4b32-8bc2-0127439a95f0-kube-api-access-8kwc8\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.909247 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/196347d8-7892-4b32-8bc2-0127439a95f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.980426 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-jh8vr" event={"ID":"196347d8-7892-4b32-8bc2-0127439a95f0","Type":"ContainerDied","Data":"97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b"} Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.980463 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e182eb74bedfe8eaf315c1f144ec59ed2098d566229b1fb8133b578dde591b" Jan 31 15:09:44 crc kubenswrapper[4763]: I0131 15:09:44.980482 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-jh8vr" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010044 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010122 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010167 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010654 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.010670 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.031902 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"redhat-operators-7lf9s\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") " pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.057156 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.535546 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:09:45 crc kubenswrapper[4763]: W0131 15:09:45.539248 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92768ea9_03ef_4d26_8cdf_dfc9f45575be.slice/crio-cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944 WatchSource:0}: Error finding container cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944: Status 404 returned error can't find the container with id cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944 Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.986938 4763 generic.go:334] "Generic (PLEG): container finished" podID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerID="073e38d7a622a2b02edd186fff7b67bf5dff918525a27c9bdca3bafb1385dea0" exitCode=0 Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.987095 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"073e38d7a622a2b02edd186fff7b67bf5dff918525a27c9bdca3bafb1385dea0"} Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.987218 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerStarted","Data":"cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944"} Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.989773 4763 generic.go:334] "Generic (PLEG): container finished" podID="b6835994-86f5-4950-b010-780530fceffe" containerID="7ace12af8c0e8ac2c2939cceefc9553793b2092628e79e5cd470deb7ce570d8c" exitCode=0 Jan 31 15:09:45 crc kubenswrapper[4763]: I0131 15:09:45.989800 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"7ace12af8c0e8ac2c2939cceefc9553793b2092628e79e5cd470deb7ce570d8c"} Jan 31 15:09:46 crc kubenswrapper[4763]: I0131 15:09:46.997719 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerStarted","Data":"5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967"} Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.438254 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.542595 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") pod \"b6835994-86f5-4950-b010-780530fceffe\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.542968 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") pod \"b6835994-86f5-4950-b010-780530fceffe\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.543031 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") pod \"b6835994-86f5-4950-b010-780530fceffe\" (UID: \"b6835994-86f5-4950-b010-780530fceffe\") " Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.543632 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle" (OuterVolumeSpecName: "bundle") pod "b6835994-86f5-4950-b010-780530fceffe" (UID: "b6835994-86f5-4950-b010-780530fceffe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.551337 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn" (OuterVolumeSpecName: "kube-api-access-ttfwn") pod "b6835994-86f5-4950-b010-780530fceffe" (UID: "b6835994-86f5-4950-b010-780530fceffe"). InnerVolumeSpecName "kube-api-access-ttfwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.557047 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util" (OuterVolumeSpecName: "util") pod "b6835994-86f5-4950-b010-780530fceffe" (UID: "b6835994-86f5-4950-b010-780530fceffe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.645015 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.645062 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttfwn\" (UniqueName: \"kubernetes.io/projected/b6835994-86f5-4950-b010-780530fceffe-kube-api-access-ttfwn\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:47 crc kubenswrapper[4763]: I0131 15:09:47.645077 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6835994-86f5-4950-b010-780530fceffe-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.004643 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.004637 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr" event={"ID":"b6835994-86f5-4950-b010-780530fceffe","Type":"ContainerDied","Data":"e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96"} Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.004784 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09e065931494d3ba676b6c94d088c34d670811ef1849e321bdfdd21bb343d96" Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.009136 4763 generic.go:334] "Generic (PLEG): container finished" podID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerID="5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967" exitCode=0 Jan 31 15:09:48 crc kubenswrapper[4763]: I0131 15:09:48.009164 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967"} Jan 31 15:09:49 crc kubenswrapper[4763]: I0131 15:09:49.018271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerStarted","Data":"353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f"} Jan 31 15:09:49 crc kubenswrapper[4763]: I0131 15:09:49.038461 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lf9s" podStartSLOduration=2.504772579 podStartE2EDuration="5.038440601s" podCreationTimestamp="2026-01-31 15:09:44 +0000 UTC" firstStartedPulling="2026-01-31 15:09:45.988371821 +0000 UTC m=+905.743110124" lastFinishedPulling="2026-01-31 15:09:48.522039853 +0000 UTC m=+908.276778146" observedRunningTime="2026-01-31 15:09:49.035355599 +0000 UTC m=+908.790093892" watchObservedRunningTime="2026-01-31 15:09:49.038440601 +0000 UTC m=+908.793178894" Jan 31 15:09:49 crc kubenswrapper[4763]: I0131 15:09:49.233035 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-2" podUID="cd0d5ccb-1d59-428e-9a53-17427cd0e5dc" containerName="galera" probeResult="failure" output=< Jan 31 15:09:49 crc kubenswrapper[4763]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Jan 31 15:09:49 crc kubenswrapper[4763]: > Jan 31 15:09:53 crc kubenswrapper[4763]: I0131 15:09:53.981066 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.093405 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.561557 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"] Jan 31 15:09:54 crc kubenswrapper[4763]: E0131 15:09:54.562119 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="extract" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562137 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="extract" Jan 
31 15:09:54 crc kubenswrapper[4763]: E0131 15:09:54.562158 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="util" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562166 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="util" Jan 31 15:09:54 crc kubenswrapper[4763]: E0131 15:09:54.562184 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="pull" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562194 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="pull" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562341 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6835994-86f5-4950-b010-780530fceffe" containerName="extract" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.562834 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.565311 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-l92vh" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.573265 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"] Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.732498 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmq4w\" (UniqueName: \"kubernetes.io/projected/8225c1b7-e70c-4eac-8c03-c85f86ccba6b-kube-api-access-tmq4w\") pod \"rabbitmq-cluster-operator-779fc9694b-2ltrp\" (UID: \"8225c1b7-e70c-4eac-8c03-c85f86ccba6b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.834235 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmq4w\" (UniqueName: \"kubernetes.io/projected/8225c1b7-e70c-4eac-8c03-c85f86ccba6b-kube-api-access-tmq4w\") pod \"rabbitmq-cluster-operator-779fc9694b-2ltrp\" (UID: \"8225c1b7-e70c-4eac-8c03-c85f86ccba6b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.858523 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmq4w\" (UniqueName: \"kubernetes.io/projected/8225c1b7-e70c-4eac-8c03-c85f86ccba6b-kube-api-access-tmq4w\") pod \"rabbitmq-cluster-operator-779fc9694b-2ltrp\" (UID: \"8225c1b7-e70c-4eac-8c03-c85f86ccba6b\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" Jan 31 15:09:54 crc kubenswrapper[4763]: I0131 15:09:54.877822 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" Jan 31 15:09:55 crc kubenswrapper[4763]: I0131 15:09:55.058327 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:55 crc kubenswrapper[4763]: I0131 15:09:55.058581 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:09:55 crc kubenswrapper[4763]: I0131 15:09:55.301165 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp"] Jan 31 15:09:55 crc kubenswrapper[4763]: W0131 15:09:55.302432 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8225c1b7_e70c_4eac_8c03_c85f86ccba6b.slice/crio-d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9 WatchSource:0}: Error finding container d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9: Status 404 returned error can't find the container with id d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9 Jan 31 15:09:56 crc kubenswrapper[4763]: I0131 15:09:56.059004 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" event={"ID":"8225c1b7-e70c-4eac-8c03-c85f86ccba6b","Type":"ContainerStarted","Data":"d103da8d061f8549ac02ebbf2ceea955df40ece3aa53193ad4ff0a72f1621ad9"} Jan 31 15:09:56 crc kubenswrapper[4763]: I0131 15:09:56.101735 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7lf9s" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" probeResult="failure" output=< Jan 31 15:09:56 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s Jan 31 15:09:56 crc kubenswrapper[4763]: > Jan 31 15:09:57 crc kubenswrapper[4763]: I0131 15:09:57.519424 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:57 crc kubenswrapper[4763]: I0131 15:09:57.580619 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0" Jan 31 15:09:59 crc kubenswrapper[4763]: I0131 15:09:59.082565 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" event={"ID":"8225c1b7-e70c-4eac-8c03-c85f86ccba6b","Type":"ContainerStarted","Data":"2324d8a9ed8f2c44f3ad6220287cb0c518cf7ffab76c02c0cfa9b226cf105495"} Jan 31 15:09:59 crc kubenswrapper[4763]: I0131 15:09:59.106398 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-2ltrp" podStartSLOduration=1.658153131 podStartE2EDuration="5.10636792s" podCreationTimestamp="2026-01-31 15:09:54 +0000 UTC" firstStartedPulling="2026-01-31 15:09:55.304776742 +0000 UTC m=+915.059515035" lastFinishedPulling="2026-01-31 15:09:58.752991511 +0000 UTC m=+918.507729824" observedRunningTime="2026-01-31 15:09:59.095068662 +0000 UTC m=+918.849806985" watchObservedRunningTime="2026-01-31 15:09:59.10636792 +0000 UTC m=+918.861106243" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.588208 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.589652 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.591950 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592014 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592178 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592737 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-r5k9h" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.592882 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.612856 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739516 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dee0d43f-8ff0-4094-9833-92cda38ee182-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739586 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739777 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr98\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-kube-api-access-4cr98\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739896 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739940 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.739965 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dee0d43f-8ff0-4094-9833-92cda38ee182-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc 
kubenswrapper[4763]: I0131 15:10:01.739988 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.740018 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dee0d43f-8ff0-4094-9833-92cda38ee182-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841246 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841301 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841324 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841369 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dee0d43f-8ff0-4094-9833-92cda38ee182-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841389 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dee0d43f-8ff0-4094-9833-92cda38ee182-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841416 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dee0d43f-8ff0-4094-9833-92cda38ee182-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841438 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.841478 4763 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cr98\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-kube-api-access-4cr98\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.842110 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.842315 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.844376 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dee0d43f-8ff0-4094-9833-92cda38ee182-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.848261 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dee0d43f-8ff0-4094-9833-92cda38ee182-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.849072 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.849889 4763 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.849915 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a5d563a84855d70dea8af3b1d20a098856ea2dfb7695437899fa690ba4d17c18/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.856651 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cr98\" (UniqueName: \"kubernetes.io/projected/dee0d43f-8ff0-4094-9833-92cda38ee182-kube-api-access-4cr98\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.867477 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dee0d43f-8ff0-4094-9833-92cda38ee182-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.872219 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b0c2808c-e218-4e2e-a8c1-f733c80242ac\") pod \"rabbitmq-server-0\" (UID: \"dee0d43f-8ff0-4094-9833-92cda38ee182\") " pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 15:10:01 crc kubenswrapper[4763]: I0131 15:10:01.922455 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 31 15:10:02 crc kubenswrapper[4763]: I0131 15:10:02.391105 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"]
Jan 31 15:10:02 crc kubenswrapper[4763]: W0131 15:10:02.399936 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee0d43f_8ff0_4094_9833_92cda38ee182.slice/crio-16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803 WatchSource:0}: Error finding container 16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803: Status 404 returned error can't find the container with id 16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.113144 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerStarted","Data":"16302b32ea7a0164c4878e259db72d48663e1b565d6f4b36531750d5e18e8803"}
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.131125 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"]
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.132148 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.135198 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-rs9zg"
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.139614 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"]
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.282620 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"keystone-operator-index-k7dfb\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.384085 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"keystone-operator-index-k7dfb\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.404647 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"keystone-operator-index-k7dfb\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") " pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.508651 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:03 crc kubenswrapper[4763]: I0131 15:10:03.978664 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"]
Jan 31 15:10:04 crc kubenswrapper[4763]: I0131 15:10:04.120953 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerStarted","Data":"a6060687b04652b9a0f6ae1dd3967200dafd729292b1e7995df8cfdf4f628cac"}
Jan 31 15:10:05 crc kubenswrapper[4763]: I0131 15:10:05.121048 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7lf9s"
Jan 31 15:10:05 crc kubenswrapper[4763]: I0131 15:10:05.195522 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7lf9s"
Jan 31 15:10:07 crc kubenswrapper[4763]: I0131 15:10:07.929491 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"]
Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.730058 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-njgcq"]
Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.731314 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-njgcq"
Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.742253 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-njgcq"]
Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.864830 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrb7\" (UniqueName: \"kubernetes.io/projected/191c97ac-f003-4a51-8f06-395adf3ac8a7-kube-api-access-wsrb7\") pod \"keystone-operator-index-njgcq\" (UID: \"191c97ac-f003-4a51-8f06-395adf3ac8a7\") " pod="openstack-operators/keystone-operator-index-njgcq"
Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.966515 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrb7\" (UniqueName: \"kubernetes.io/projected/191c97ac-f003-4a51-8f06-395adf3ac8a7-kube-api-access-wsrb7\") pod \"keystone-operator-index-njgcq\" (UID: \"191c97ac-f003-4a51-8f06-395adf3ac8a7\") " pod="openstack-operators/keystone-operator-index-njgcq"
Jan 31 15:10:08 crc kubenswrapper[4763]: I0131 15:10:08.997726 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrb7\" (UniqueName: \"kubernetes.io/projected/191c97ac-f003-4a51-8f06-395adf3ac8a7-kube-api-access-wsrb7\") pod \"keystone-operator-index-njgcq\" (UID: \"191c97ac-f003-4a51-8f06-395adf3ac8a7\") " pod="openstack-operators/keystone-operator-index-njgcq"
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.088918 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-njgcq"
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.167647 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerStarted","Data":"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"}
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.167968 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-k7dfb" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" containerID="cri-o://1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" gracePeriod=2
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.192670 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-k7dfb" podStartSLOduration=2.314011014 podStartE2EDuration="6.192630118s" podCreationTimestamp="2026-01-31 15:10:03 +0000 UTC" firstStartedPulling="2026-01-31 15:10:03.992598637 +0000 UTC m=+923.747336930" lastFinishedPulling="2026-01-31 15:10:07.871217741 +0000 UTC m=+927.625956034" observedRunningTime="2026-01-31 15:10:09.191289052 +0000 UTC m=+928.946027365" watchObservedRunningTime="2026-01-31 15:10:09.192630118 +0000 UTC m=+928.947368421"
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.573819 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-njgcq"]
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.615175 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.780184 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") pod \"15e174ce-d52f-4b1f-a00f-97624902794c\" (UID: \"15e174ce-d52f-4b1f-a00f-97624902794c\") "
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.792885 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2" (OuterVolumeSpecName: "kube-api-access-hf5t2") pod "15e174ce-d52f-4b1f-a00f-97624902794c" (UID: "15e174ce-d52f-4b1f-a00f-97624902794c"). InnerVolumeSpecName "kube-api-access-hf5t2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.881264 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf5t2\" (UniqueName: \"kubernetes.io/projected/15e174ce-d52f-4b1f-a00f-97624902794c-kube-api-access-hf5t2\") on node \"crc\" DevicePath \"\""
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.927002 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"]
Jan 31 15:10:09 crc kubenswrapper[4763]: I0131 15:10:09.927320 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lf9s" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" containerID="cri-o://353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f" gracePeriod=2
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.176393 4763 generic.go:334] "Generic (PLEG): container finished" podID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerID="353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f" exitCode=0
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.176638 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f"}
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.177648 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerStarted","Data":"6811b875a2287d4113a9be20ece01fcce4b87486deb0296107994f35e34ccfcc"}
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180139 4763 generic.go:334] "Generic (PLEG): container finished" podID="15e174ce-d52f-4b1f-a00f-97624902794c" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647" exitCode=0
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180187 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerDied","Data":"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"}
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180203 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-k7dfb" event={"ID":"15e174ce-d52f-4b1f-a00f-97624902794c","Type":"ContainerDied","Data":"a6060687b04652b9a0f6ae1dd3967200dafd729292b1e7995df8cfdf4f628cac"}
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180209 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-k7dfb"
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.180219 4763 scope.go:117] "RemoveContainer" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.187336 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-njgcq" event={"ID":"191c97ac-f003-4a51-8f06-395adf3ac8a7","Type":"ContainerStarted","Data":"44e02b26a38394eea42daed94a5d4412cc2b2eabb36f27f29c55f4fd859fae18"}
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.187368 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-njgcq" event={"ID":"191c97ac-f003-4a51-8f06-395adf3ac8a7","Type":"ContainerStarted","Data":"e4ca04660bfdcdcc2db1a3f893727cca54819a87089ba2336ca91feb5dda7059"}
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.214946 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-njgcq" podStartSLOduration=1.806248966 podStartE2EDuration="2.214930958s" podCreationTimestamp="2026-01-31 15:10:08 +0000 UTC" firstStartedPulling="2026-01-31 15:10:09.597870758 +0000 UTC m=+929.352609051" lastFinishedPulling="2026-01-31 15:10:10.00655274 +0000 UTC m=+929.761291043" observedRunningTime="2026-01-31 15:10:10.21388214 +0000 UTC m=+929.968620433" watchObservedRunningTime="2026-01-31 15:10:10.214930958 +0000 UTC m=+929.969669251"
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.217490 4763 scope.go:117] "RemoveContainer" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"
Jan 31 15:10:10 crc kubenswrapper[4763]: E0131 15:10:10.218428 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647\": container with ID starting with 1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647 not found: ID does not exist" containerID="1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.218483 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647"} err="failed to get container status \"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647\": rpc error: code = NotFound desc = could not find container \"1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647\": container with ID starting with 1e0db8528a0a9866c9254bb8b0bea0592be92d01f5d41073949b5dce4d31f647 not found: ID does not exist"
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.236636 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"]
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.244446 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-k7dfb"]
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.368815 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s"
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.489689 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") pod \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") "
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.490091 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") pod \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") "
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.490161 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") pod \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\" (UID: \"92768ea9-03ef-4d26-8cdf-dfc9f45575be\") "
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.490997 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities" (OuterVolumeSpecName: "utilities") pod "92768ea9-03ef-4d26-8cdf-dfc9f45575be" (UID: "92768ea9-03ef-4d26-8cdf-dfc9f45575be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.495667 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc" (OuterVolumeSpecName: "kube-api-access-c6hqc") pod "92768ea9-03ef-4d26-8cdf-dfc9f45575be" (UID: "92768ea9-03ef-4d26-8cdf-dfc9f45575be"). InnerVolumeSpecName "kube-api-access-c6hqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.592979 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.593031 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6hqc\" (UniqueName: \"kubernetes.io/projected/92768ea9-03ef-4d26-8cdf-dfc9f45575be-kube-api-access-c6hqc\") on node \"crc\" DevicePath \"\""
Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.612986 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92768ea9-03ef-4d26-8cdf-dfc9f45575be" (UID: "92768ea9-03ef-4d26-8cdf-dfc9f45575be"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:10 crc kubenswrapper[4763]: I0131 15:10:10.694440 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92768ea9-03ef-4d26-8cdf-dfc9f45575be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.049662 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" path="/var/lib/kubelet/pods/15e174ce-d52f-4b1f-a00f-97624902794c/volumes" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.196272 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lf9s" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.196288 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lf9s" event={"ID":"92768ea9-03ef-4d26-8cdf-dfc9f45575be","Type":"ContainerDied","Data":"cf5fe5ed60cf52bb8acb61df24499edf923c28d5275c3d448151c826d9a8c944"} Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.196359 4763 scope.go:117] "RemoveContainer" containerID="353034f3575668335c53a61718a3785c11949a3c5bc89f04bc4c7b1cdde9725f" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.216490 4763 scope.go:117] "RemoveContainer" containerID="5a540798457e4c3c700e89a882a83e3778cea51f9eae6a9767248d8876ca9967" Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.218730 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.223817 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lf9s"] Jan 31 15:10:11 crc kubenswrapper[4763]: I0131 15:10:11.237246 4763 scope.go:117] "RemoveContainer" containerID="073e38d7a622a2b02edd186fff7b67bf5dff918525a27c9bdca3bafb1385dea0" Jan 31 15:10:13 crc kubenswrapper[4763]: I0131 15:10:13.050672 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" path="/var/lib/kubelet/pods/92768ea9-03ef-4d26-8cdf-dfc9f45575be/volumes" Jan 31 15:10:14 crc kubenswrapper[4763]: I0131 15:10:14.177150 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:10:14 crc kubenswrapper[4763]: I0131 15:10:14.177259 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.090377 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.091043 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.125765 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 
15:10:19 crc kubenswrapper[4763]: I0131 15:10:19.285511 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-njgcq" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.838373 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f"] Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839042 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839058 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839071 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839077 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839092 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-content" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839099 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-content" Jan 31 15:10:22 crc kubenswrapper[4763]: E0131 15:10:22.839111 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-utilities" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839118 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="extract-utilities" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839233 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="92768ea9-03ef-4d26-8cdf-dfc9f45575be" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.839247 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e174ce-d52f-4b1f-a00f-97624902794c" containerName="registry-server" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.840301 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.844204 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:10:22 crc kubenswrapper[4763]: I0131 15:10:22.859653 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f"] Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.010860 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.010966 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.011071 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112029 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112106 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112152 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112675 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.112749 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.135169 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.165194 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:23 crc kubenswrapper[4763]: I0131 15:10:23.611421 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f"] Jan 31 15:10:24 crc kubenswrapper[4763]: I0131 15:10:24.291782 4763 generic.go:334] "Generic (PLEG): container finished" podID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerID="a988c20b2fc824dfa2187fb0c43dcc497e6078364f55152dc25f4fb7f4a11a13" exitCode=0 Jan 31 15:10:24 crc kubenswrapper[4763]: I0131 15:10:24.291868 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"a988c20b2fc824dfa2187fb0c43dcc497e6078364f55152dc25f4fb7f4a11a13"} Jan 31 15:10:24 crc kubenswrapper[4763]: I0131 15:10:24.291940 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerStarted","Data":"c1f5aa5759e18f6ecd4208338f7fca67fd1ed594e63c10e05f827e9d6840bf6d"} Jan 31 15:10:26 crc kubenswrapper[4763]: I0131 15:10:26.306832 4763 generic.go:334] "Generic (PLEG): container finished" podID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerID="b117fd5ccecec46af5a92a0f70fa14891623c7f571bb856a356307bbb4cbe941" exitCode=0 Jan 31 15:10:26 crc kubenswrapper[4763]: I0131 15:10:26.306866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"b117fd5ccecec46af5a92a0f70fa14891623c7f571bb856a356307bbb4cbe941"} Jan 31 15:10:27 crc kubenswrapper[4763]: I0131 15:10:27.320112 4763 generic.go:334] "Generic (PLEG): container finished" podID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerID="61146e2d5ceacdad075aedad451af4039443c708b570e2c5cab365b417714e6c" exitCode=0 Jan 31 15:10:27 crc kubenswrapper[4763]: I0131 15:10:27.320182 4763 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"61146e2d5ceacdad075aedad451af4039443c708b570e2c5cab365b417714e6c"} Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.696531 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.789762 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") pod \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.789900 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") pod \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.789934 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") pod \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\" (UID: \"6fb80892-b089-4dff-baa8-44ffdf6b9b84\") " Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.791130 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle" (OuterVolumeSpecName: "bundle") pod "6fb80892-b089-4dff-baa8-44ffdf6b9b84" (UID: "6fb80892-b089-4dff-baa8-44ffdf6b9b84"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.796629 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p" (OuterVolumeSpecName: "kube-api-access-58z8p") pod "6fb80892-b089-4dff-baa8-44ffdf6b9b84" (UID: "6fb80892-b089-4dff-baa8-44ffdf6b9b84"). InnerVolumeSpecName "kube-api-access-58z8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.821541 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util" (OuterVolumeSpecName: "util") pod "6fb80892-b089-4dff-baa8-44ffdf6b9b84" (UID: "6fb80892-b089-4dff-baa8-44ffdf6b9b84"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.891360 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.891418 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fb80892-b089-4dff-baa8-44ffdf6b9b84-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:28 crc kubenswrapper[4763]: I0131 15:10:28.891428 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58z8p\" (UniqueName: \"kubernetes.io/projected/6fb80892-b089-4dff-baa8-44ffdf6b9b84-kube-api-access-58z8p\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:29 crc kubenswrapper[4763]: I0131 15:10:29.338115 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" event={"ID":"6fb80892-b089-4dff-baa8-44ffdf6b9b84","Type":"ContainerDied","Data":"c1f5aa5759e18f6ecd4208338f7fca67fd1ed594e63c10e05f827e9d6840bf6d"} Jan 31 15:10:29 crc kubenswrapper[4763]: I0131 15:10:29.338169 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f5aa5759e18f6ecd4208338f7fca67fd1ed594e63c10e05f827e9d6840bf6d" Jan 31 15:10:29 crc kubenswrapper[4763]: I0131 15:10:29.338210 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383134 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7"] Jan 31 15:10:40 crc kubenswrapper[4763]: E0131 15:10:40.383638 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="util" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383652 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="util" Jan 31 15:10:40 crc kubenswrapper[4763]: E0131 15:10:40.383675 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="extract" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383683 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="extract" Jan 31 15:10:40 crc kubenswrapper[4763]: E0131 15:10:40.383710 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="pull" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383716 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="pull" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.383848 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb80892-b089-4dff-baa8-44ffdf6b9b84" containerName="extract" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.384460 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.387035 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m2hdc" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.392043 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.394608 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7"] Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.549987 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z4nz\" (UniqueName: \"kubernetes.io/projected/970b855e-e278-4e6b-b9ba-733f8f798f59-kube-api-access-6z4nz\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.550052 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-webhook-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.550335 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-apiservice-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.651568 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z4nz\" (UniqueName: \"kubernetes.io/projected/970b855e-e278-4e6b-b9ba-733f8f798f59-kube-api-access-6z4nz\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.651631 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-webhook-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.651789 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-apiservice-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.658611 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-apiservice-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.660197 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/970b855e-e278-4e6b-b9ba-733f8f798f59-webhook-cert\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.677512 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z4nz\" (UniqueName: \"kubernetes.io/projected/970b855e-e278-4e6b-b9ba-733f8f798f59-kube-api-access-6z4nz\") pod \"keystone-operator-controller-manager-7598465c56-xt6m7\" (UID: \"970b855e-e278-4e6b-b9ba-733f8f798f59\") " pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:40 crc kubenswrapper[4763]: I0131 15:10:40.700607 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.211414 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7"] Jan 31 15:10:41 crc kubenswrapper[4763]: W0131 15:10:41.240320 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970b855e_e278_4e6b_b9ba_733f8f798f59.slice/crio-93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74 WatchSource:0}: Error finding container 93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74: Status 404 returned error can't find the container with id 93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74 Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.423663 4763 generic.go:334] "Generic (PLEG): container finished" podID="dee0d43f-8ff0-4094-9833-92cda38ee182" containerID="6811b875a2287d4113a9be20ece01fcce4b87486deb0296107994f35e34ccfcc" exitCode=0 Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.423787 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerDied","Data":"6811b875a2287d4113a9be20ece01fcce4b87486deb0296107994f35e34ccfcc"} Jan 31 15:10:41 crc kubenswrapper[4763]: I0131 15:10:41.425531 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" event={"ID":"970b855e-e278-4e6b-b9ba-733f8f798f59","Type":"ContainerStarted","Data":"93fdfe6ee228b8c9acf7872bab934f86017e6a84f9785022dd9b65f97c76da74"} Jan 31 15:10:42 crc kubenswrapper[4763]: I0131 15:10:42.488389 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"dee0d43f-8ff0-4094-9833-92cda38ee182","Type":"ContainerStarted","Data":"d8cfe849fe563d5150ded6223866da779040c7b0ad70c45e5be36992721a721c"} Jan 31 15:10:42 crc kubenswrapper[4763]: I0131 15:10:42.488626 4763 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:42 crc kubenswrapper[4763]: I0131 15:10:42.564723 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.475642595 podStartE2EDuration="42.564675132s" podCreationTimestamp="2026-01-31 15:10:00 +0000 UTC" firstStartedPulling="2026-01-31 15:10:02.402833298 +0000 UTC m=+922.157571601" lastFinishedPulling="2026-01-31 15:10:08.491865825 +0000 UTC m=+928.246604138" observedRunningTime="2026-01-31 15:10:42.563160182 +0000 UTC m=+962.317898475" watchObservedRunningTime="2026-01-31 15:10:42.564675132 +0000 UTC m=+962.319413435" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.177189 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.177655 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.188715 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.189260 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.189314 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e" gracePeriod=600 Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.505269 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e" exitCode=0 Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.505306 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e"} Jan 31 15:10:44 crc kubenswrapper[4763]: I0131 15:10:44.505337 4763 scope.go:117] "RemoveContainer" containerID="b6da57a4479d9d9e6a720f55c269fed96eb09e97fe8b846af0fb3bb3cfb46085" Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.512463 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" 
event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e"} Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.514873 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" event={"ID":"970b855e-e278-4e6b-b9ba-733f8f798f59","Type":"ContainerStarted","Data":"b0efb0ccd0af22ccdb931385cfe59c4525e9c4b4102052e89268cfbb08e6d64c"} Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.514993 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:45 crc kubenswrapper[4763]: I0131 15:10:45.553831 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" podStartSLOduration=2.439723111 podStartE2EDuration="5.553811528s" podCreationTimestamp="2026-01-31 15:10:40 +0000 UTC" firstStartedPulling="2026-01-31 15:10:41.244256143 +0000 UTC m=+960.998994476" lastFinishedPulling="2026-01-31 15:10:44.3583446 +0000 UTC m=+964.113082893" observedRunningTime="2026-01-31 15:10:45.549832722 +0000 UTC m=+965.304571055" watchObservedRunningTime="2026-01-31 15:10:45.553811528 +0000 UTC m=+965.308549821" Jan 31 15:10:50 crc kubenswrapper[4763]: I0131 15:10:50.707901 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7598465c56-xt6m7" Jan 31 15:10:51 crc kubenswrapper[4763]: I0131 15:10:51.925884 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.232246 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.233414 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.235353 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.240162 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.241075 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.248493 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.261068 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364306 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364400 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364462 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.364614 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465713 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465789 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465853 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.465891 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.466461 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.466620 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.489728 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"keystone-db-create-xn2nh\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.490542 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"keystone-38d2-account-create-update-cpmns\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.595897 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:56 crc kubenswrapper[4763]: I0131 15:10:56.606995 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.026129 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.096465 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.603722 4763 generic.go:334] "Generic (PLEG): container finished" podID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerID="6b27f13fa86685c4b37caba09090beecec2d3e1290d084b6ae1cf269665b318e" exitCode=0 Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.604080 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" event={"ID":"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c","Type":"ContainerDied","Data":"6b27f13fa86685c4b37caba09090beecec2d3e1290d084b6ae1cf269665b318e"} Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.604115 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" event={"ID":"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c","Type":"ContainerStarted","Data":"eee91691610658059d63311404e674f57b1539ec80ce3317c767f9c0bed213fc"} Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.605299 4763 generic.go:334] "Generic (PLEG): container finished" podID="1994b227-dbc6-494a-886d-4573eee02640" containerID="22a73cc01e38d3368c2378a8856a884268bdaab9f5443ff62ac66e26d223ed89" exitCode=0 Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.605327 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-xn2nh" event={"ID":"1994b227-dbc6-494a-886d-4573eee02640","Type":"ContainerDied","Data":"22a73cc01e38d3368c2378a8856a884268bdaab9f5443ff62ac66e26d223ed89"} Jan 31 15:10:57 crc kubenswrapper[4763]: I0131 15:10:57.605341 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-xn2nh" event={"ID":"1994b227-dbc6-494a-886d-4573eee02640","Type":"ContainerStarted","Data":"159103466e2db66e884cd684ba7173a176780e2f7a9849d363b643f121180ccd"} Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.045195 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.050997 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101098 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") pod \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101281 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") pod \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\" (UID: \"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101346 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") pod \"1994b227-dbc6-494a-886d-4573eee02640\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101401 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") pod \"1994b227-dbc6-494a-886d-4573eee02640\" (UID: \"1994b227-dbc6-494a-886d-4573eee02640\") " Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101907 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" (UID: "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101914 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1994b227-dbc6-494a-886d-4573eee02640" (UID: "1994b227-dbc6-494a-886d-4573eee02640"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.101985 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.102005 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b227-dbc6-494a-886d-4573eee02640-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.109170 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d" (OuterVolumeSpecName: "kube-api-access-m2g6d") pod "1994b227-dbc6-494a-886d-4573eee02640" (UID: "1994b227-dbc6-494a-886d-4573eee02640"). InnerVolumeSpecName "kube-api-access-m2g6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.115021 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz" (OuterVolumeSpecName: "kube-api-access-f97vz") pod "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" (UID: "4a1a2199-bf73-476a-8a6b-c50b1c26aa6c"). InnerVolumeSpecName "kube-api-access-f97vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.140667 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-2w984"] Jan 31 15:10:59 crc kubenswrapper[4763]: E0131 15:10:59.140984 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerName="mariadb-account-create-update" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141004 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerName="mariadb-account-create-update" Jan 31 15:10:59 crc kubenswrapper[4763]: E0131 15:10:59.141032 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1994b227-dbc6-494a-886d-4573eee02640" containerName="mariadb-database-create" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141038 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="1994b227-dbc6-494a-886d-4573eee02640" containerName="mariadb-database-create" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141134 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="1994b227-dbc6-494a-886d-4573eee02640" containerName="mariadb-database-create" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141152 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" containerName="mariadb-account-create-update" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.141572 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.144296 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-5rn7l" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.148409 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-2w984"] Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.204539 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wrxw\" (UniqueName: \"kubernetes.io/projected/ef84b681-2ea6-4684-84c0-6d452a5b47df-kube-api-access-5wrxw\") pod \"barbican-operator-index-2w984\" (UID: \"ef84b681-2ea6-4684-84c0-6d452a5b47df\") " pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.204621 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f97vz\" (UniqueName: \"kubernetes.io/projected/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c-kube-api-access-f97vz\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.204639 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2g6d\" (UniqueName: \"kubernetes.io/projected/1994b227-dbc6-494a-886d-4573eee02640-kube-api-access-m2g6d\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.306085 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wrxw\" (UniqueName: \"kubernetes.io/projected/ef84b681-2ea6-4684-84c0-6d452a5b47df-kube-api-access-5wrxw\") pod \"barbican-operator-index-2w984\" (UID: \"ef84b681-2ea6-4684-84c0-6d452a5b47df\") " pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.337473 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wrxw\" (UniqueName: \"kubernetes.io/projected/ef84b681-2ea6-4684-84c0-6d452a5b47df-kube-api-access-5wrxw\") pod \"barbican-operator-index-2w984\" (UID: \"ef84b681-2ea6-4684-84c0-6d452a5b47df\") " pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.465428 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.631832 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" event={"ID":"4a1a2199-bf73-476a-8a6b-c50b1c26aa6c","Type":"ContainerDied","Data":"eee91691610658059d63311404e674f57b1539ec80ce3317c767f9c0bed213fc"} Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.632070 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee91691610658059d63311404e674f57b1539ec80ce3317c767f9c0bed213fc" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.632136 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-38d2-account-create-update-cpmns" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.641618 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-xn2nh" event={"ID":"1994b227-dbc6-494a-886d-4573eee02640","Type":"ContainerDied","Data":"159103466e2db66e884cd684ba7173a176780e2f7a9849d363b643f121180ccd"} Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.641656 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159103466e2db66e884cd684ba7173a176780e2f7a9849d363b643f121180ccd" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.641764 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-xn2nh" Jan 31 15:10:59 crc kubenswrapper[4763]: I0131 15:10:59.896355 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-2w984"] Jan 31 15:10:59 crc kubenswrapper[4763]: W0131 15:10:59.903174 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef84b681_2ea6_4684_84c0_6d452a5b47df.slice/crio-607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762 WatchSource:0}: Error finding container 607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762: Status 404 returned error can't find the container with id 607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762 Jan 31 15:11:00 crc kubenswrapper[4763]: I0131 15:11:00.651785 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-2w984" event={"ID":"ef84b681-2ea6-4684-84c0-6d452a5b47df","Type":"ContainerStarted","Data":"607bbf8ff5faec621683cf864459e22b972f3e71cd8ec83e52d3481a3dda7762"} Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.659407 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-2w984" event={"ID":"ef84b681-2ea6-4684-84c0-6d452a5b47df","Type":"ContainerStarted","Data":"ba02d9d5067b6cfc2c84bd386cf6d1363bb1b4a505c72bd10a8bd2b21693417d"} Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.683484 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-2w984" podStartSLOduration=1.844176913 podStartE2EDuration="2.683462546s" podCreationTimestamp="2026-01-31 15:10:59 +0000 UTC" firstStartedPulling="2026-01-31 15:10:59.905291718 +0000 UTC m=+979.660030011" lastFinishedPulling="2026-01-31 15:11:00.744577341 +0000 UTC m=+980.499315644" observedRunningTime="2026-01-31 15:11:01.683310962 +0000 UTC m=+981.438049265" watchObservedRunningTime="2026-01-31 15:11:01.683462546 +0000 UTC m=+981.438200839" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.799777 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.800481 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.801903 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.802641 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-skmvm" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.802785 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.805058 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.811938 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.846434 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.846784 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.947760 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.947845 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.953557 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:01 crc kubenswrapper[4763]: I0131 15:11:01.963282 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"keystone-db-sync-cfz59\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:02 crc kubenswrapper[4763]: I0131 15:11:02.120452 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:02 crc kubenswrapper[4763]: I0131 15:11:02.599930 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:11:02 crc kubenswrapper[4763]: W0131 15:11:02.603367 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod576fbbd2_e600_40a9_95f4_2772c96807f1.slice/crio-092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5 WatchSource:0}: Error finding container 092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5: Status 404 returned error can't find the container with id 092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5 Jan 31 15:11:02 crc kubenswrapper[4763]: I0131 15:11:02.665435 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerStarted","Data":"092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5"} Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.466540 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.468048 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.507760 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:09 crc kubenswrapper[4763]: I0131 15:11:09.768604 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-2w984" Jan 31 15:11:12 crc kubenswrapper[4763]: I0131 15:11:12.749427 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerStarted","Data":"d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e"} Jan 31 15:11:12 crc kubenswrapper[4763]: I0131 15:11:12.765374 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-cfz59" podStartSLOduration=2.077762499 podStartE2EDuration="11.765353588s" podCreationTimestamp="2026-01-31 15:11:01 +0000 UTC" firstStartedPulling="2026-01-31 15:11:02.605394844 +0000 UTC m=+982.360133137" lastFinishedPulling="2026-01-31 15:11:12.292985923 +0000 UTC m=+992.047724226" observedRunningTime="2026-01-31 15:11:12.763328935 +0000 UTC m=+992.518067248" watchObservedRunningTime="2026-01-31 15:11:12.765353588 +0000 UTC m=+992.520091921" Jan 31 15:11:15 crc kubenswrapper[4763]: I0131 15:11:15.772446 4763 generic.go:334] "Generic (PLEG): container finished" podID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerID="d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e" exitCode=0 Jan 31 15:11:15 crc kubenswrapper[4763]: I0131 15:11:15.772555 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerDied","Data":"d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e"} Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.064779 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.198511 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") pod \"576fbbd2-e600-40a9-95f4-2772c96807f1\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.198687 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") pod \"576fbbd2-e600-40a9-95f4-2772c96807f1\" (UID: \"576fbbd2-e600-40a9-95f4-2772c96807f1\") " Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.203642 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58" (OuterVolumeSpecName: "kube-api-access-pxl58") pod "576fbbd2-e600-40a9-95f4-2772c96807f1" (UID: "576fbbd2-e600-40a9-95f4-2772c96807f1"). InnerVolumeSpecName "kube-api-access-pxl58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.244947 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data" (OuterVolumeSpecName: "config-data") pod "576fbbd2-e600-40a9-95f4-2772c96807f1" (UID: "576fbbd2-e600-40a9-95f4-2772c96807f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.300586 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/576fbbd2-e600-40a9-95f4-2772c96807f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.300618 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxl58\" (UniqueName: \"kubernetes.io/projected/576fbbd2-e600-40a9-95f4-2772c96807f1-kube-api-access-pxl58\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.775472 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck"] Jan 31 15:11:17 crc kubenswrapper[4763]: E0131 15:11:17.776155 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerName="keystone-db-sync" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.776194 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerName="keystone-db-sync" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.776615 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" containerName="keystone-db-sync" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.781465 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.797349 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-cfz59" event={"ID":"576fbbd2-e600-40a9-95f4-2772c96807f1","Type":"ContainerDied","Data":"092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5"} Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.797419 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="092f98dde1cbfd08cabc98509525e608a932af9160563cb66a514af33b160da5" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.797471 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-cfz59" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.800517 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck"] Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.819681 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.909849 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.910089 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.910203 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.982689 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.983962 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.989578 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-skmvm" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.990137 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.990943 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.991009 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Jan 31 15:11:17 crc kubenswrapper[4763]: I0131 15:11:17.992411 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.004158 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.011569 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.011613 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.011640 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.012074 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.012307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.034210 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5fb\" (UniqueName: 
\"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.113559 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.113752 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.113935 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.114023 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.114150 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.132324 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.215940 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.215998 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.216031 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.216073 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.216109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.220005 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.220361 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.221358 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.221987 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc 
kubenswrapper[4763]: I0131 15:11:18.237307 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"keystone-bootstrap-tjb5m\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.309295 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.638896 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck"] Jan 31 15:11:18 crc kubenswrapper[4763]: W0131 15:11:18.642429 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82458dee_ae6f_46c9_ac1b_745146c8b9bf.slice/crio-25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b WatchSource:0}: Error finding container 25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b: Status 404 returned error can't find the container with id 25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.728572 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.805049 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerStarted","Data":"25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b"} Jan 31 15:11:18 crc kubenswrapper[4763]: I0131 15:11:18.806618 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerStarted","Data":"a96a11be8a6ffc55d8b5833c9e559c88fe1548a90837d48887b0742767b1be95"} Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.818014 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerStarted","Data":"411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59"} Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.821160 4763 generic.go:334] "Generic (PLEG): container finished" podID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerID="c99bff41e355c4dcb44e21f9886148266110b133cde0aeb7743a27e9d27a9b88" exitCode=0 Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.821204 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"c99bff41e355c4dcb44e21f9886148266110b133cde0aeb7743a27e9d27a9b88"} Jan 31 15:11:19 crc kubenswrapper[4763]: I0131 15:11:19.846473 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" podStartSLOduration=2.846453356 podStartE2EDuration="2.846453356s" podCreationTimestamp="2026-01-31 15:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:11:19.84241793 +0000 
UTC m=+999.597156253" watchObservedRunningTime="2026-01-31 15:11:19.846453356 +0000 UTC m=+999.601191659" Jan 31 15:11:21 crc kubenswrapper[4763]: I0131 15:11:21.837690 4763 generic.go:334] "Generic (PLEG): container finished" podID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerID="a50eb69c245fbaf455b79ed7afb47369d0c54938bbb5415939e372cfafbbcbe4" exitCode=0 Jan 31 15:11:21 crc kubenswrapper[4763]: I0131 15:11:21.837815 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"a50eb69c245fbaf455b79ed7afb47369d0c54938bbb5415939e372cfafbbcbe4"} Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.847538 4763 generic.go:334] "Generic (PLEG): container finished" podID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerID="411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59" exitCode=0 Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.847620 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerDied","Data":"411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59"} Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.850772 4763 generic.go:334] "Generic (PLEG): container finished" podID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerID="1d3c6f773d8fd7f2fd40aba8f0ca445f528da2d1b35ad1ac37b8569b4236bbf8" exitCode=0 Jan 31 15:11:22 crc kubenswrapper[4763]: I0131 15:11:22.850796 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"1d3c6f773d8fd7f2fd40aba8f0ca445f528da2d1b35ad1ac37b8569b4236bbf8"} Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.275940 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.283544 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406428 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406523 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") pod \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406565 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406616 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406712 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") pod \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406768 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406802 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") pod \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\" (UID: \"82458dee-ae6f-46c9-ac1b-745146c8b9bf\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.406831 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") pod \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\" (UID: \"9b3e2d68-7406-4653-85ed-41746d3a6ea7\") " Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.409165 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle" (OuterVolumeSpecName: "bundle") pod "82458dee-ae6f-46c9-ac1b-745146c8b9bf" (UID: "82458dee-ae6f-46c9-ac1b-745146c8b9bf"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.412242 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv" (OuterVolumeSpecName: "kube-api-access-xkltv") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "kube-api-access-xkltv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.412327 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.412493 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts" (OuterVolumeSpecName: "scripts") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.413259 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.413954 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb" (OuterVolumeSpecName: "kube-api-access-mz5fb") pod "82458dee-ae6f-46c9-ac1b-745146c8b9bf" (UID: "82458dee-ae6f-46c9-ac1b-745146c8b9bf"). InnerVolumeSpecName "kube-api-access-mz5fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.435443 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data" (OuterVolumeSpecName: "config-data") pod "9b3e2d68-7406-4653-85ed-41746d3a6ea7" (UID: "9b3e2d68-7406-4653-85ed-41746d3a6ea7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509195 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz5fb\" (UniqueName: \"kubernetes.io/projected/82458dee-ae6f-46c9-ac1b-745146c8b9bf-kube-api-access-mz5fb\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509237 4763 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509264 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509277 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509290 4763 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509303 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3e2d68-7406-4653-85ed-41746d3a6ea7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.509319 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkltv\" (UniqueName: \"kubernetes.io/projected/9b3e2d68-7406-4653-85ed-41746d3a6ea7-kube-api-access-xkltv\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.804276 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util" (OuterVolumeSpecName: "util") pod "82458dee-ae6f-46c9-ac1b-745146c8b9bf" (UID: "82458dee-ae6f-46c9-ac1b-745146c8b9bf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.814901 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82458dee-ae6f-46c9-ac1b-745146c8b9bf-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.873349 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.874265 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-tjb5m" event={"ID":"9b3e2d68-7406-4653-85ed-41746d3a6ea7","Type":"ContainerDied","Data":"a96a11be8a6ffc55d8b5833c9e559c88fe1548a90837d48887b0742767b1be95"} Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.874358 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a96a11be8a6ffc55d8b5833c9e559c88fe1548a90837d48887b0742767b1be95" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.878335 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" event={"ID":"82458dee-ae6f-46c9-ac1b-745146c8b9bf","Type":"ContainerDied","Data":"25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b"} Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.878375 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c3f401ab83dd80391b3e9558c432539ccdd0588556306daaa7f4287ca7bd8b" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.878418 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998490 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-7659668474-6698l"] Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998778 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="extract" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998789 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="extract" Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998807 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="pull" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998813 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="pull" Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998820 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="util" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998826 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="util" Jan 31 15:11:24 crc kubenswrapper[4763]: E0131 15:11:24.998835 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerName="keystone-bootstrap" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998841 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerName="keystone-bootstrap" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998954 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="82458dee-ae6f-46c9-ac1b-745146c8b9bf" containerName="extract" Jan 31 15:11:24 crc kubenswrapper[4763]: I0131 15:11:24.998974 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" containerName="keystone-bootstrap" Jan 31 15:11:24 crc 
kubenswrapper[4763]: I0131 15:11:24.999479 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.003094 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.003417 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.003559 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.005708 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-skmvm" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.022582 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-7659668474-6698l"] Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124877 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-scripts\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124923 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-credential-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124951 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-config-data\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.124978 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htfnp\" (UniqueName: \"kubernetes.io/projected/791f5002-b2b5-488c-99c8-5ed511cffed2-kube-api-access-htfnp\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.125043 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-fernet-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226247 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-fernet-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226326 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-scripts\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226363 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-credential-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226385 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-config-data\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.226417 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htfnp\" (UniqueName: \"kubernetes.io/projected/791f5002-b2b5-488c-99c8-5ed511cffed2-kube-api-access-htfnp\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.231679 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-fernet-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.232410 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-scripts\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.233156 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-credential-keys\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.234793 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/791f5002-b2b5-488c-99c8-5ed511cffed2-config-data\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.256565 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htfnp\" (UniqueName: \"kubernetes.io/projected/791f5002-b2b5-488c-99c8-5ed511cffed2-kube-api-access-htfnp\") pod \"keystone-7659668474-6698l\" (UID: \"791f5002-b2b5-488c-99c8-5ed511cffed2\") " pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.324126 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.591479 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-7659668474-6698l"] Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.885684 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7659668474-6698l" event={"ID":"791f5002-b2b5-488c-99c8-5ed511cffed2","Type":"ContainerStarted","Data":"b5a67cda7cedf37898174bd8c26fa8b6ca9c316498d1a188eafaf7a0a061fb31"} Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.885767 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-7659668474-6698l" event={"ID":"791f5002-b2b5-488c-99c8-5ed511cffed2","Type":"ContainerStarted","Data":"ebf8be13087df49c14a90235b8a6897359f30896de4cc0d771817a61a26e48a4"} Jan 31 15:11:25 crc kubenswrapper[4763]: I0131 15:11:25.885806 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.059284 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-7659668474-6698l" podStartSLOduration=12.059267592 podStartE2EDuration="12.059267592s" podCreationTimestamp="2026-01-31 15:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:11:25.903102047 +0000 UTC m=+1005.657840350" watchObservedRunningTime="2026-01-31 15:11:36.059267592 +0000 UTC m=+1015.814005885" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.060014 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh"] Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.060791 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.062564 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.064575 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9gltf" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.072719 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh"] Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.211604 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-webhook-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.211935 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-apiservice-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.211995 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq8x5\" (UniqueName: \"kubernetes.io/projected/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-kube-api-access-wq8x5\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.313306 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq8x5\" (UniqueName: \"kubernetes.io/projected/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-kube-api-access-wq8x5\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.313378 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-webhook-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.313408 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-apiservice-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.323478 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-apiservice-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.323516 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-webhook-cert\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.340229 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq8x5\" (UniqueName: \"kubernetes.io/projected/8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5-kube-api-access-wq8x5\") pod \"barbican-operator-controller-manager-56656dfbf6-dtcnh\" (UID: \"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5\") " pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.390600 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.824671 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh"] Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.836141 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:11:36 crc kubenswrapper[4763]: I0131 15:11:36.982057 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" event={"ID":"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5","Type":"ContainerStarted","Data":"19a6b8b6f0d398d2b52a9e74b825a08d77af27515af66305ec6ec80ffaf21b75"} Jan 31 15:11:40 crc kubenswrapper[4763]: I0131 15:11:40.004043 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" event={"ID":"8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5","Type":"ContainerStarted","Data":"f3133d68dda34dc46cfc087e9ca55b949f26235398e1875b2cdef3729b25ae32"} Jan 31 15:11:40 crc kubenswrapper[4763]: I0131 15:11:40.004687 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:40 crc kubenswrapper[4763]: I0131 15:11:40.029203 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" podStartSLOduration=1.590889644 podStartE2EDuration="4.029187769s" podCreationTimestamp="2026-01-31 15:11:36 +0000 UTC" firstStartedPulling="2026-01-31 15:11:36.835819787 +0000 UTC m=+1016.590558080" lastFinishedPulling="2026-01-31 15:11:39.274117912 +0000 UTC m=+1019.028856205" observedRunningTime="2026-01-31 15:11:40.025260725 +0000 UTC m=+1019.779999028" watchObservedRunningTime="2026-01-31 15:11:40.029187769 +0000 UTC m=+1019.783926072" Jan 31 15:11:46 crc kubenswrapper[4763]: I0131 15:11:46.396513 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-56656dfbf6-dtcnh" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.626107 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.628504 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.633950 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.635292 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.636837 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.641764 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.668901 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.730658 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.730823 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.731043 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.731095 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.784683 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-7659668474-6698l" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.832851 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod 
\"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833122 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833252 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833326 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.833968 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.834911 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.853749 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"barbican-db-create-fxrtm\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.861347 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod \"barbican-0181-account-create-update-96pj2\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.951516 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:56 crc kubenswrapper[4763]: I0131 15:11:56.963734 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:57 crc kubenswrapper[4763]: I0131 15:11:57.480630 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:11:57 crc kubenswrapper[4763]: W0131 15:11:57.481332 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464b92bd_fb87_4fc5_aa90_5460b1e35eec.slice/crio-2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594 WatchSource:0}: Error finding container 2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594: Status 404 returned error can't find the container with id 2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594 Jan 31 15:11:57 crc kubenswrapper[4763]: I0131 15:11:57.566247 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:11:57 crc kubenswrapper[4763]: W0131 15:11:57.574569 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe55f4fd_e12f_4bcf_ab19_d71977f3e6ec.slice/crio-e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c WatchSource:0}: Error finding container e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c: Status 404 returned error can't find the container with id e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.148184 4763 generic.go:334] "Generic (PLEG): container finished" podID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerID="a540ed0c915c9ec8346e959a99b0e8cef75297ffb67063cbd5e427a00b227441" exitCode=0 Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.148278 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" event={"ID":"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec","Type":"ContainerDied","Data":"a540ed0c915c9ec8346e959a99b0e8cef75297ffb67063cbd5e427a00b227441"} Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.148429 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" event={"ID":"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec","Type":"ContainerStarted","Data":"e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c"} Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.150404 4763 generic.go:334] "Generic (PLEG): container finished" podID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerID="fee6200b81ed36139edc76fff1de6f650a35a10f71c9521569ecb3d4c7be34df" exitCode=0 Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.150456 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-fxrtm" event={"ID":"464b92bd-fb87-4fc5-aa90-5460b1e35eec","Type":"ContainerDied","Data":"fee6200b81ed36139edc76fff1de6f650a35a10f71c9521569ecb3d4c7be34df"} Jan 31 15:11:58 crc kubenswrapper[4763]: I0131 15:11:58.150526 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-fxrtm" event={"ID":"464b92bd-fb87-4fc5-aa90-5460b1e35eec","Type":"ContainerStarted","Data":"2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594"} Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.348666 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-h5chr"] Jan 31 15:11:59 crc kubenswrapper[4763]: 
I0131 15:11:59.349793 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.352424 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-mpf7p" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.371644 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-h5chr"] Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.468228 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27sk4\" (UniqueName: \"kubernetes.io/projected/2c571391-06de-46b1-8932-99d44a63dc42-kube-api-access-27sk4\") pod \"swift-operator-index-h5chr\" (UID: \"2c571391-06de-46b1-8932-99d44a63dc42\") " pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.559972 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.564575 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.569656 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27sk4\" (UniqueName: \"kubernetes.io/projected/2c571391-06de-46b1-8932-99d44a63dc42-kube-api-access-27sk4\") pod \"swift-operator-index-h5chr\" (UID: \"2c571391-06de-46b1-8932-99d44a63dc42\") " pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.623379 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27sk4\" (UniqueName: \"kubernetes.io/projected/2c571391-06de-46b1-8932-99d44a63dc42-kube-api-access-27sk4\") pod \"swift-operator-index-h5chr\" (UID: \"2c571391-06de-46b1-8932-99d44a63dc42\") " pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.670759 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") pod \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671068 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") pod \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\" (UID: \"464b92bd-fb87-4fc5-aa90-5460b1e35eec\") " Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671163 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") pod \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671220 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") pod 
\"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\" (UID: \"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec\") " Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671716 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "464b92bd-fb87-4fc5-aa90-5460b1e35eec" (UID: "464b92bd-fb87-4fc5-aa90-5460b1e35eec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.671786 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" (UID: "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.674471 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n" (OuterVolumeSpecName: "kube-api-access-rtw9n") pod "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" (UID: "be55f4fd-e12f-4bcf-ab19-d71977f3e6ec"). InnerVolumeSpecName "kube-api-access-rtw9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.676174 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.676898 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf" (OuterVolumeSpecName: "kube-api-access-dkcmf") pod "464b92bd-fb87-4fc5-aa90-5460b1e35eec" (UID: "464b92bd-fb87-4fc5-aa90-5460b1e35eec"). InnerVolumeSpecName "kube-api-access-dkcmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772859 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtw9n\" (UniqueName: \"kubernetes.io/projected/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-kube-api-access-rtw9n\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772883 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkcmf\" (UniqueName: \"kubernetes.io/projected/464b92bd-fb87-4fc5-aa90-5460b1e35eec-kube-api-access-dkcmf\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772892 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/464b92bd-fb87-4fc5-aa90-5460b1e35eec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:11:59 crc kubenswrapper[4763]: I0131 15:11:59.772899 4763 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.109071 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-h5chr"] Jan 31 15:12:00 crc kubenswrapper[4763]: W0131 15:12:00.113476 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c571391_06de_46b1_8932_99d44a63dc42.slice/crio-7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9 WatchSource:0}: Error finding container 7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9: Status 404 returned error can't find the container with id 7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9 Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.175845 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-h5chr" event={"ID":"2c571391-06de-46b1-8932-99d44a63dc42","Type":"ContainerStarted","Data":"7d69566b93d62e3d00554e026fc47d6e3957f18bf85461fc0a22a07c1e2974d9"} Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.178616 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-fxrtm" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.178613 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-fxrtm" event={"ID":"464b92bd-fb87-4fc5-aa90-5460b1e35eec","Type":"ContainerDied","Data":"2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594"} Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.178742 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9f1a613814224afc6183a529e2db1d1f86c3eb7cdbd5b9033f065ce6895594" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.181605 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" event={"ID":"be55f4fd-e12f-4bcf-ab19-d71977f3e6ec","Type":"ContainerDied","Data":"e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c"} Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.181645 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c361fc8437a025933c599cfb9a4239985b83b89d07f5332e2973ff8dc6920c" Jan 31 15:12:00 crc kubenswrapper[4763]: I0131 15:12:00.181795 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-0181-account-create-update-96pj2" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.907346 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-q2gqt"] Jan 31 15:12:01 crc kubenswrapper[4763]: E0131 15:12:01.908041 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerName="mariadb-account-create-update" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908052 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerName="mariadb-account-create-update" Jan 31 15:12:01 crc kubenswrapper[4763]: E0131 15:12:01.908073 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerName="mariadb-database-create" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908098 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerName="mariadb-database-create" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908203 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" containerName="mariadb-database-create" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908215 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" containerName="mariadb-account-create-update" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.908610 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.910832 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.910920 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-vtvb4" Jan 31 15:12:01 crc kubenswrapper[4763]: I0131 15:12:01.919254 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-q2gqt"] Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.018257 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.018332 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.119307 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.119346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.134564 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.134597 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"barbican-db-sync-q2gqt\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:02 crc kubenswrapper[4763]: I0131 15:12:02.233148 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.056499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-q2gqt"] Jan 31 15:12:03 crc kubenswrapper[4763]: W0131 15:12:03.065477 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd76ca4ae_ac08_455d_af41_ec673a980e8e.slice/crio-ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871 WatchSource:0}: Error finding container ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871: Status 404 returned error can't find the container with id ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871 Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.213176 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerStarted","Data":"ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871"} Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.214615 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-h5chr" event={"ID":"2c571391-06de-46b1-8932-99d44a63dc42","Type":"ContainerStarted","Data":"b7621ef6d6502af209e566c99a4a7e4d26b476519cfba05da9733d997636d2d0"} Jan 31 15:12:03 crc kubenswrapper[4763]: I0131 15:12:03.230053 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-h5chr" podStartSLOduration=1.7134884879999999 podStartE2EDuration="4.230038832s" podCreationTimestamp="2026-01-31 15:11:59 +0000 UTC" firstStartedPulling="2026-01-31 15:12:00.117349562 +0000 UTC m=+1039.872087855" lastFinishedPulling="2026-01-31 15:12:02.633899906 +0000 UTC m=+1042.388638199" observedRunningTime="2026-01-31 15:12:03.227400342 +0000 UTC m=+1042.982138645" watchObservedRunningTime="2026-01-31 15:12:03.230038832 +0000 UTC m=+1042.984777115" Jan 31 15:12:07 crc kubenswrapper[4763]: I0131 15:12:07.239732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerStarted","Data":"ad4a03306832e4128d821649efae8a9cb32add168b52d49dcd7430a5b6a1cda9"} Jan 31 15:12:07 crc kubenswrapper[4763]: I0131 15:12:07.263118 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" podStartSLOduration=2.67715373 podStartE2EDuration="6.263086318s" podCreationTimestamp="2026-01-31 15:12:01 +0000 UTC" firstStartedPulling="2026-01-31 15:12:03.067599729 +0000 UTC m=+1042.822338012" lastFinishedPulling="2026-01-31 15:12:06.653532297 +0000 UTC m=+1046.408270600" observedRunningTime="2026-01-31 15:12:07.260225353 +0000 UTC m=+1047.014963686" watchObservedRunningTime="2026-01-31 15:12:07.263086318 +0000 UTC m=+1047.017824651" Jan 31 15:12:09 crc kubenswrapper[4763]: I0131 15:12:09.677021 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:09 crc kubenswrapper[4763]: I0131 15:12:09.677385 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:09 crc kubenswrapper[4763]: I0131 15:12:09.708075 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:10 crc kubenswrapper[4763]: I0131 15:12:10.265352 4763 generic.go:334] "Generic (PLEG): container finished" podID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerID="ad4a03306832e4128d821649efae8a9cb32add168b52d49dcd7430a5b6a1cda9" exitCode=0 Jan 31 15:12:10 crc kubenswrapper[4763]: I0131 15:12:10.265478 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerDied","Data":"ad4a03306832e4128d821649efae8a9cb32add168b52d49dcd7430a5b6a1cda9"} Jan 31 15:12:10 crc kubenswrapper[4763]: I0131 15:12:10.311616 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-h5chr" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.681475 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.771971 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") pod \"d76ca4ae-ac08-455d-af41-ec673a980e8e\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.772182 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") pod \"d76ca4ae-ac08-455d-af41-ec673a980e8e\" (UID: \"d76ca4ae-ac08-455d-af41-ec673a980e8e\") " Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.778944 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg" (OuterVolumeSpecName: "kube-api-access-jh6kg") pod "d76ca4ae-ac08-455d-af41-ec673a980e8e" (UID: "d76ca4ae-ac08-455d-af41-ec673a980e8e"). InnerVolumeSpecName "kube-api-access-jh6kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.783134 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d76ca4ae-ac08-455d-af41-ec673a980e8e" (UID: "d76ca4ae-ac08-455d-af41-ec673a980e8e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.874367 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh6kg\" (UniqueName: \"kubernetes.io/projected/d76ca4ae-ac08-455d-af41-ec673a980e8e-kube-api-access-jh6kg\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:11 crc kubenswrapper[4763]: I0131 15:12:11.874682 4763 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d76ca4ae-ac08-455d-af41-ec673a980e8e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.285568 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" event={"ID":"d76ca4ae-ac08-455d-af41-ec673a980e8e","Type":"ContainerDied","Data":"ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871"} Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.285650 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea89c28319b091916cba9282193936577d192707c220da6b00d746cd490c6871" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.285675 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-q2gqt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.403858 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"] Jan 31 15:12:12 crc kubenswrapper[4763]: E0131 15:12:12.404457 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerName="barbican-db-sync" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.404499 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerName="barbican-db-sync" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.404880 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76ca4ae-ac08-455d-af41-ec673a980e8e" containerName="barbican-db-sync" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.407075 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.409383 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-rrv7w" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.412025 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.482920 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.482952 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.483003 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.552847 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.554946 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.558200 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.559070 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.560088 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-vtvb4" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.586490 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.586546 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.586604 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.587544 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.587650 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.606520 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.609483 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.612489 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.617757 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.659401 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687640 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data-custom\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687687 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687733 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfsvg\" (UniqueName: \"kubernetes.io/projected/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-kube-api-access-qfsvg\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687756 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687799 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687821 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-logs\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc 
kubenswrapper[4763]: I0131 15:12:12.687848 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data-custom\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.687878 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nln7z\" (UniqueName: \"kubernetes.io/projected/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-kube-api-access-nln7z\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.694929 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.731294 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.788993 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nln7z\" (UniqueName: \"kubernetes.io/projected/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-kube-api-access-nln7z\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789060 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data-custom\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789087 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789105 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfsvg\" (UniqueName: \"kubernetes.io/projected/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-kube-api-access-qfsvg\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789127 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789162 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789189 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-logs\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789752 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data-custom\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789814 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-logs\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.789818 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-logs\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.794093 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.794368 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.795170 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-config-data-custom\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.795360 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-config-data-custom\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.811418 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qfsvg\" (UniqueName: \"kubernetes.io/projected/49dd2bcf-ceb5-4df8-8a24-eec8de703f88-kube-api-access-qfsvg\") pod \"barbican-worker-5477f7cb8f-8rssm\" (UID: \"49dd2bcf-ceb5-4df8-8a24-eec8de703f88\") " pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.812028 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nln7z\" (UniqueName: \"kubernetes.io/projected/ae9bd061-c69e-4ff5-acd4-2b953c4b1657-kube-api-access-nln7z\") pod \"barbican-keystone-listener-5b8c7cdd44-cxsxt\" (UID: \"ae9bd061-c69e-4ff5-acd4-2b953c4b1657\") " pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.820748 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-697dc779fb-sgr8v"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.821789 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.823877 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.833420 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-697dc779fb-sgr8v"] Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.890915 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6nj\" (UniqueName: \"kubernetes.io/projected/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-kube-api-access-hl6nj\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.891011 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data-custom\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.891034 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-logs\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.891115 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.977195 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.984193 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992199 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992273 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl6nj\" (UniqueName: \"kubernetes.io/projected/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-kube-api-access-hl6nj\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992335 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data-custom\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.992355 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-logs\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.993175 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-logs\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.996673 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:12 crc kubenswrapper[4763]: I0131 15:12:12.996957 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-config-data-custom\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.007652 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl6nj\" (UniqueName: \"kubernetes.io/projected/5d7c9f19-bf9f-4c6c-a113-a10d6be02620-kube-api-access-hl6nj\") pod \"barbican-api-697dc779fb-sgr8v\" (UID: \"5d7c9f19-bf9f-4c6c-a113-a10d6be02620\") " pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.197868 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.210208 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k"] Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.297623 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerStarted","Data":"82f049d2d2d9f665f4c95a5038e4786af0956c4319bb2f39dcf6a264d7aa9e98"} Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.384361 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm"] Jan 31 15:12:13 crc kubenswrapper[4763]: W0131 15:12:13.394170 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49dd2bcf_ceb5_4df8_8a24_eec8de703f88.slice/crio-79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6 WatchSource:0}: Error finding container 79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6: Status 404 returned error can't find the container with id 79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6 Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.451927 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt"] Jan 31 15:12:13 crc kubenswrapper[4763]: W0131 15:12:13.504616 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae9bd061_c69e_4ff5_acd4_2b953c4b1657.slice/crio-73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662 WatchSource:0}: Error finding container 73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662: Status 404 returned error can't find the container with id 73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662 Jan 31 15:12:13 crc kubenswrapper[4763]: I0131 15:12:13.621995 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-697dc779fb-sgr8v"] Jan 31 15:12:13 crc kubenswrapper[4763]: W0131 15:12:13.624857 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d7c9f19_bf9f_4c6c_a113_a10d6be02620.slice/crio-22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8 WatchSource:0}: Error finding container 22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8: Status 404 returned error can't find the container with id 22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8 Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.304894 4763 generic.go:334] "Generic (PLEG): container finished" podID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerID="83f481ef83a36e058ad57e8887c57a159a7b965bf32a4e8fcc3c0e9d4eb867c4" exitCode=0 Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.305507 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"83f481ef83a36e058ad57e8887c57a159a7b965bf32a4e8fcc3c0e9d4eb867c4"} Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.307126 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" event={"ID":"49dd2bcf-ceb5-4df8-8a24-eec8de703f88","Type":"ContainerStarted","Data":"79a9272bfc6e8c88b0d0981a9dd45e2e14d6f8213da1ce55bd59e57078037da6"} Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.309055 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" event={"ID":"5d7c9f19-bf9f-4c6c-a113-a10d6be02620","Type":"ContainerStarted","Data":"22bc3eac2bf0850b76d7244dae6bada86563af8adaa101368070b1a3efe3c4a8"} Jan 31 15:12:14 crc kubenswrapper[4763]: I0131 15:12:14.310237 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" event={"ID":"ae9bd061-c69e-4ff5-acd4-2b953c4b1657","Type":"ContainerStarted","Data":"73f1783cc2402a9a4768c2a16b7b7a92bcd1dd342d279be98a5ede23e82ba662"} Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317189 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" event={"ID":"5d7c9f19-bf9f-4c6c-a113-a10d6be02620","Type":"ContainerStarted","Data":"9107d62a1e173e6aaf93f341f03623980f36c449de1070e7f85051dd0e0ed55a"} Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317509 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317520 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" event={"ID":"5d7c9f19-bf9f-4c6c-a113-a10d6be02620","Type":"ContainerStarted","Data":"919605ae139098002974e02b46a5992073ecabcc697338106b310bef772d521a"} Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.317530 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:15 crc kubenswrapper[4763]: I0131 15:12:15.338724 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" podStartSLOduration=3.338707713 podStartE2EDuration="3.338707713s" podCreationTimestamp="2026-01-31 15:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:12:15.334222755 +0000 UTC m=+1055.088961048" watchObservedRunningTime="2026-01-31 15:12:15.338707713 +0000 UTC m=+1055.093446006" Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.330034 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" event={"ID":"49dd2bcf-ceb5-4df8-8a24-eec8de703f88","Type":"ContainerStarted","Data":"eb595ea3d89a26b0075353f00f22467e1266f84f62274e060d46868558049b7f"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.331493 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" event={"ID":"49dd2bcf-ceb5-4df8-8a24-eec8de703f88","Type":"ContainerStarted","Data":"d0952e31ef39c9a55d20702a5d1138af741af92ad43e99993cb10dc96469f78a"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.332413 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" event={"ID":"ae9bd061-c69e-4ff5-acd4-2b953c4b1657","Type":"ContainerStarted","Data":"9fd92a3e1c9681ddc011cc682b6fc793279b23073d079a2f0d066f3d5e188268"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.332471 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" event={"ID":"ae9bd061-c69e-4ff5-acd4-2b953c4b1657","Type":"ContainerStarted","Data":"cca922b5b3603bdd61b9f82e387d61119f809e5dc027f28836b250484884c25c"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.334251 4763 generic.go:334] "Generic (PLEG): container finished" podID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerID="9c1b6dba72d2b968ba45652a3dc1383db9c2d8a84715b8baabd77b509878687e" exitCode=0 Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.334317 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"9c1b6dba72d2b968ba45652a3dc1383db9c2d8a84715b8baabd77b509878687e"} Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.354479 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-5477f7cb8f-8rssm" podStartSLOduration=1.98904389 podStartE2EDuration="4.354456589s" podCreationTimestamp="2026-01-31 15:12:12 +0000 UTC" firstStartedPulling="2026-01-31 15:12:13.400798123 +0000 UTC m=+1053.155536416" lastFinishedPulling="2026-01-31 15:12:15.766210782 +0000 UTC m=+1055.520949115" observedRunningTime="2026-01-31 15:12:16.351186443 +0000 UTC m=+1056.105924766" watchObservedRunningTime="2026-01-31 15:12:16.354456589 +0000 UTC m=+1056.109194882" Jan 31 15:12:16 crc kubenswrapper[4763]: I0131 15:12:16.402012 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-5b8c7cdd44-cxsxt" podStartSLOduration=2.077802986 podStartE2EDuration="4.401963075s" podCreationTimestamp="2026-01-31 15:12:12 +0000 UTC" firstStartedPulling="2026-01-31 15:12:13.507824841 +0000 UTC m=+1053.262563134" lastFinishedPulling="2026-01-31 15:12:15.83198492 +0000 UTC m=+1055.586723223" observedRunningTime="2026-01-31 15:12:16.393922512 +0000 UTC m=+1056.148660875" watchObservedRunningTime="2026-01-31 15:12:16.401963075 +0000 UTC m=+1056.156701418" Jan 31 15:12:17 crc kubenswrapper[4763]: I0131 15:12:17.344287 4763 generic.go:334] "Generic (PLEG): container finished" podID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerID="cba9275b9b94971ddb5814ace082a578caac0bca4a10413edc10e664eda952ea" exitCode=0 Jan 31 15:12:17 crc kubenswrapper[4763]: I0131 15:12:17.344339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"cba9275b9b94971ddb5814ace082a578caac0bca4a10413edc10e664eda952ea"} Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.755336 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.876872 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") pod \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.877164 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") pod \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.877292 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") pod \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\" (UID: \"7b615184-cd97-4133-b2e4-fc44e41d1e6b\") " Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.878791 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle" (OuterVolumeSpecName: "bundle") pod "7b615184-cd97-4133-b2e4-fc44e41d1e6b" (UID: "7b615184-cd97-4133-b2e4-fc44e41d1e6b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.885971 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl" (OuterVolumeSpecName: "kube-api-access-7hgpl") pod "7b615184-cd97-4133-b2e4-fc44e41d1e6b" (UID: "7b615184-cd97-4133-b2e4-fc44e41d1e6b"). InnerVolumeSpecName "kube-api-access-7hgpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.966650 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util" (OuterVolumeSpecName: "util") pod "7b615184-cd97-4133-b2e4-fc44e41d1e6b" (UID: "7b615184-cd97-4133-b2e4-fc44e41d1e6b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.982385 4763 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-util\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.982437 4763 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b615184-cd97-4133-b2e4-fc44e41d1e6b-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:18 crc kubenswrapper[4763]: I0131 15:12:18.982970 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hgpl\" (UniqueName: \"kubernetes.io/projected/7b615184-cd97-4133-b2e4-fc44e41d1e6b-kube-api-access-7hgpl\") on node \"crc\" DevicePath \"\"" Jan 31 15:12:19 crc kubenswrapper[4763]: I0131 15:12:19.361300 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" event={"ID":"7b615184-cd97-4133-b2e4-fc44e41d1e6b","Type":"ContainerDied","Data":"82f049d2d2d9f665f4c95a5038e4786af0956c4319bb2f39dcf6a264d7aa9e98"} Jan 31 15:12:19 crc kubenswrapper[4763]: I0131 15:12:19.361748 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82f049d2d2d9f665f4c95a5038e4786af0956c4319bb2f39dcf6a264d7aa9e98" Jan 31 15:12:19 crc kubenswrapper[4763]: I0131 15:12:19.361533 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k" Jan 31 15:12:24 crc kubenswrapper[4763]: I0131 15:12:24.460725 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:24 crc kubenswrapper[4763]: I0131 15:12:24.631502 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-697dc779fb-sgr8v" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.053561 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"] Jan 31 15:12:33 crc kubenswrapper[4763]: E0131 15:12:33.054311 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="util" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054325 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="util" Jan 31 15:12:33 crc kubenswrapper[4763]: E0131 15:12:33.054361 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="pull" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054367 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="pull" Jan 31 15:12:33 crc kubenswrapper[4763]: E0131 15:12:33.054378 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="extract" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054384 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="extract" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054489 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b615184-cd97-4133-b2e4-fc44e41d1e6b" containerName="extract" Jan 
31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.054970 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.057392 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"] Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.059416 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.069663 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bshht" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.212873 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-webhook-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.212932 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-apiservice-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.212973 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gs9c\" (UniqueName: \"kubernetes.io/projected/42b142bb-6946-4933-841b-33c9fc9899b2-kube-api-access-9gs9c\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.314108 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-webhook-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.314151 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-apiservice-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.314187 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gs9c\" (UniqueName: \"kubernetes.io/projected/42b142bb-6946-4933-841b-33c9fc9899b2-kube-api-access-9gs9c\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 
15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.319917 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-apiservice-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.326922 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42b142bb-6946-4933-841b-33c9fc9899b2-webhook-cert\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.329214 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gs9c\" (UniqueName: \"kubernetes.io/projected/42b142bb-6946-4933-841b-33c9fc9899b2-kube-api-access-9gs9c\") pod \"swift-operator-controller-manager-77769db8d5-gb8pv\" (UID: \"42b142bb-6946-4933-841b-33c9fc9899b2\") " pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.375358 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:33 crc kubenswrapper[4763]: I0131 15:12:33.822844 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv"] Jan 31 15:12:34 crc kubenswrapper[4763]: I0131 15:12:34.497535 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" event={"ID":"42b142bb-6946-4933-841b-33c9fc9899b2","Type":"ContainerStarted","Data":"fb5a37d63efd9bc239a05a8decce5fea78aaa2d4553e227aa143fbaeedf0d19b"} Jan 31 15:12:35 crc kubenswrapper[4763]: I0131 15:12:35.505279 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" event={"ID":"42b142bb-6946-4933-841b-33c9fc9899b2","Type":"ContainerStarted","Data":"6dd171ce46a904248293f1701ff9b40c2f4c21c03a535732238e23828992dcb5"} Jan 31 15:12:35 crc kubenswrapper[4763]: I0131 15:12:35.506224 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:35 crc kubenswrapper[4763]: I0131 15:12:35.550076 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" podStartSLOduration=1.251244353 podStartE2EDuration="2.550055032s" podCreationTimestamp="2026-01-31 15:12:33 +0000 UTC" firstStartedPulling="2026-01-31 15:12:33.835145476 +0000 UTC m=+1073.589883889" lastFinishedPulling="2026-01-31 15:12:35.133956275 +0000 UTC m=+1074.888694568" observedRunningTime="2026-01-31 15:12:35.521799106 +0000 UTC m=+1075.276537429" watchObservedRunningTime="2026-01-31 15:12:35.550055032 +0000 UTC m=+1075.304793335" Jan 31 15:12:43 crc kubenswrapper[4763]: I0131 15:12:43.380662 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-77769db8d5-gb8pv" Jan 31 15:12:44 crc kubenswrapper[4763]: I0131 15:12:44.177864 
4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:12:44 crc kubenswrapper[4763]: I0131 15:12:44.177958 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.239875 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.246873 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.248698 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.248722 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.249218 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.251471 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-47mrl" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.263968 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435297 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435456 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435623 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435822 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.435851 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537600 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537745 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537778 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537811 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.537876 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.538093 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.538451 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538787 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538827 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:12:58 crc kubenswrapper[4763]: E0131 15:12:58.538888 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:12:59.038863053 +0000 UTC m=+1098.793601436 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.538907 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.567496 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:58 crc kubenswrapper[4763]: I0131 15:12:58.569081 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.045022 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.045179 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.045475 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.045551 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:00.045512784 +0000 UTC m=+1099.800251077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.626674 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.628056 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.635878 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.644461 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.768899 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.768962 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.769023 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.769520 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.769566 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870825 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870878 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870908 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod 
\"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870931 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.870972 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.871030 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.871062 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: E0131 15:12:59.871107 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:00.371091564 +0000 UTC m=+1100.125829857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.871344 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.871465 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.875991 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:12:59 crc kubenswrapper[4763]: I0131 15:12:59.886953 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:00 crc kubenswrapper[4763]: 
I0131 15:13:00.073928 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.074138 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.074160 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.074222 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:02.074205572 +0000 UTC m=+1101.828943875 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found Jan 31 15:13:00 crc kubenswrapper[4763]: I0131 15:13:00.378009 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.378187 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.378421 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:13:00 crc kubenswrapper[4763]: E0131 15:13:00.378477 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:01.378461875 +0000 UTC m=+1101.133200168 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found Jan 31 15:13:01 crc kubenswrapper[4763]: I0131 15:13:01.391633 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:01 crc kubenswrapper[4763]: E0131 15:13:01.391845 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:01 crc kubenswrapper[4763]: E0131 15:13:01.391881 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:13:01 crc kubenswrapper[4763]: E0131 15:13:01.391981 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:03.391951262 +0000 UTC m=+1103.146689585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.102101 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:13:02 crc kubenswrapper[4763]: E0131 15:13:02.102682 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:02 crc kubenswrapper[4763]: E0131 15:13:02.102910 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:13:02 crc kubenswrapper[4763]: E0131 15:13:02.103113 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:06.103087487 +0000 UTC m=+1105.857825820 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.326957 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.328364 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.331282 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.331729 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.349747 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408046 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408119 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408287 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408354 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408383 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.408460 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510198 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510559 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510587 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510646 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510699 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.510738 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.511552 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.511634 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.511688 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.516652 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.519386 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.542619 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"swift-ring-rebalance-j758x\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:02 crc kubenswrapper[4763]: I0131 15:13:02.682210 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:03 crc kubenswrapper[4763]: I0131 15:13:03.124277 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:03 crc kubenswrapper[4763]: W0131 15:13:03.125495 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30af61f1_3271_4c8a_9da4_44fd302b135b.slice/crio-fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df WatchSource:0}: Error finding container fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df: Status 404 returned error can't find the container with id fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df Jan 31 15:13:03 crc kubenswrapper[4763]: I0131 15:13:03.431821 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.431954 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.431980 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:13:03 crc kubenswrapper[4763]: E0131 15:13:03.432042 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:07.432022983 +0000 UTC m=+1107.186761286 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found Jan 31 15:13:03 crc kubenswrapper[4763]: I0131 15:13:03.754080 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerStarted","Data":"fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df"} Jan 31 15:13:06 crc kubenswrapper[4763]: I0131 15:13:06.169186 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:13:06 crc kubenswrapper[4763]: E0131 15:13:06.169621 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:06 crc kubenswrapper[4763]: E0131 15:13:06.169660 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:13:06 crc kubenswrapper[4763]: E0131 15:13:06.169745 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift podName:6c4f98e3-1507-44ae-9eb7-2247ab0e37bf nodeName:}" failed. No retries permitted until 2026-01-31 15:13:14.169724421 +0000 UTC m=+1113.924462724 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift") pod "swift-storage-0" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf") : configmap "swift-ring-files" not found Jan 31 15:13:06 crc kubenswrapper[4763]: I0131 15:13:06.798200 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerStarted","Data":"5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669"} Jan 31 15:13:06 crc kubenswrapper[4763]: I0131 15:13:06.829276 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" podStartSLOduration=1.668370099 podStartE2EDuration="4.829253703s" podCreationTimestamp="2026-01-31 15:13:02 +0000 UTC" firstStartedPulling="2026-01-31 15:13:03.127532924 +0000 UTC m=+1102.882271217" lastFinishedPulling="2026-01-31 15:13:06.288416508 +0000 UTC m=+1106.043154821" observedRunningTime="2026-01-31 15:13:06.823147442 +0000 UTC m=+1106.577885825" watchObservedRunningTime="2026-01-31 15:13:06.829253703 +0000 UTC m=+1106.583991996" Jan 31 15:13:07 crc kubenswrapper[4763]: I0131 15:13:07.489598 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:07 crc kubenswrapper[4763]: E0131 15:13:07.489850 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:13:07 crc kubenswrapper[4763]: E0131 
15:13:07.489897 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk: configmap "swift-ring-files" not found Jan 31 15:13:07 crc kubenswrapper[4763]: E0131 15:13:07.489997 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift podName:d952a9d1-9446-4003-b83c-9603f44fb634 nodeName:}" failed. No retries permitted until 2026-01-31 15:13:15.489965936 +0000 UTC m=+1115.244704269 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift") pod "swift-proxy-7d8cf99555-qp8rk" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634") : configmap "swift-ring-files" not found Jan 31 15:13:12 crc kubenswrapper[4763]: I0131 15:13:12.844167 4763 generic.go:334] "Generic (PLEG): container finished" podID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerID="5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669" exitCode=0 Jan 31 15:13:12 crc kubenswrapper[4763]: I0131 15:13:12.844273 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerDied","Data":"5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669"} Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.177911 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.178041 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.253186 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.265564 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"swift-storage-0\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.464931 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.913785 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:14 crc kubenswrapper[4763]: I0131 15:13:14.957455 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063068 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063410 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063519 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063549 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063589 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063658 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") pod \"30af61f1-3271-4c8a-9da4-44fd302b135b\" (UID: \"30af61f1-3271-4c8a-9da4-44fd302b135b\") " Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.063907 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.064150 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/30af61f1-3271-4c8a-9da4-44fd302b135b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.064315 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.069935 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs" (OuterVolumeSpecName: "kube-api-access-wcvvs") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "kube-api-access-wcvvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.083463 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.093553 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts" (OuterVolumeSpecName: "scripts") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.098581 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "30af61f1-3271-4c8a-9da4-44fd302b135b" (UID: "30af61f1-3271-4c8a-9da4-44fd302b135b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165546 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165586 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcvvs\" (UniqueName: \"kubernetes.io/projected/30af61f1-3271-4c8a-9da4-44fd302b135b-kube-api-access-wcvvs\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165599 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165609 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/30af61f1-3271-4c8a-9da4-44fd302b135b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.165617 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/30af61f1-3271-4c8a-9da4-44fd302b135b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.500755 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" event={"ID":"30af61f1-3271-4c8a-9da4-44fd302b135b","Type":"ContainerDied","Data":"fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df"} Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.500827 4763 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="fb609af9f52ea6872e3a8809efddfb4a4013cd859ce7ca0dadde9eb0b414e2df" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.500792 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-j758x" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.502507 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"fc755e76f946cbf270f23b7d66505db67f9f56fbc2a1281c6fec2037e341ce02"} Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.573566 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.579767 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"swift-proxy-7d8cf99555-qp8rk\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.716640 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:15 crc kubenswrapper[4763]: I0131 15:13:15.847584 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:16 crc kubenswrapper[4763]: I0131 15:13:16.470499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:16 crc kubenswrapper[4763]: W0131 15:13:16.503324 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd952a9d1_9446_4003_b83c_9603f44fb634.slice/crio-c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a WatchSource:0}: Error finding container c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a: Status 404 returned error can't find the container with id c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a Jan 31 15:13:16 crc kubenswrapper[4763]: I0131 15:13:16.518068 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5"} Jan 31 15:13:16 crc kubenswrapper[4763]: I0131 15:13:16.518425 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.276826 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.525845 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" 
event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerStarted","Data":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.525885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerStarted","Data":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.525895 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerStarted","Data":"c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.526924 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.526952 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.529950 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.529970 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343"} Jan 31 15:13:17 crc kubenswrapper[4763]: I0131 15:13:17.551610 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" podStartSLOduration=18.551592612 podStartE2EDuration="18.551592612s" podCreationTimestamp="2026-01-31 15:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:17.544638698 +0000 UTC m=+1117.299376991" watchObservedRunningTime="2026-01-31 15:13:17.551592612 +0000 UTC m=+1117.306330905" Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.539778 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea"} Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.540119 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa"} Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.540135 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0"} Jan 31 15:13:18 crc kubenswrapper[4763]: I0131 15:13:18.836181 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:19 crc 
kubenswrapper[4763]: I0131 15:13:19.557149 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.419107 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.571224 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572367 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572521 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572636 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d"} Jan 31 15:13:20 crc kubenswrapper[4763]: I0131 15:13:20.572829 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411"} Jan 31 15:13:21 crc kubenswrapper[4763]: I0131 15:13:21.602851 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186"} Jan 31 15:13:21 crc kubenswrapper[4763]: I0131 15:13:21.603318 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerStarted","Data":"c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b"} Jan 31 15:13:21 crc kubenswrapper[4763]: I0131 15:13:21.653918 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=19.892535153 podStartE2EDuration="24.653898669s" podCreationTimestamp="2026-01-31 15:12:57 +0000 UTC" firstStartedPulling="2026-01-31 15:13:14.965847849 +0000 UTC m=+1114.720586142" lastFinishedPulling="2026-01-31 15:13:19.727211355 +0000 UTC m=+1119.481949658" observedRunningTime="2026-01-31 15:13:21.646414781 +0000 UTC m=+1121.401153084" watchObservedRunningTime="2026-01-31 15:13:21.653898669 +0000 UTC m=+1121.408636982" Jan 31 15:13:22 crc kubenswrapper[4763]: I0131 15:13:22.061263 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:23 crc kubenswrapper[4763]: I0131 15:13:23.628169 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:25 crc kubenswrapper[4763]: I0131 15:13:25.225056 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:25 crc kubenswrapper[4763]: I0131 15:13:25.851977 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:25 crc kubenswrapper[4763]: I0131 15:13:25.853794 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:26 crc kubenswrapper[4763]: I0131 15:13:26.822534 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:28 crc kubenswrapper[4763]: I0131 15:13:28.414670 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-j758x_30af61f1-3271-4c8a-9da4-44fd302b135b/swift-ring-rebalance/0.log" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.812014 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:29 crc kubenswrapper[4763]: E0131 15:13:29.814175 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerName="swift-ring-rebalance" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.814405 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerName="swift-ring-rebalance" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.814982 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" containerName="swift-ring-rebalance" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.823484 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.838295 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.844452 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.861318 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.899981 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.937823 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.948146 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-j758x"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.958917 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.959735 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.962123 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.962279 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:13:29 crc kubenswrapper[4763]: I0131 15:13:29.967850 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007438 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007481 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007499 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007798 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: 
I0131 15:13:30.007851 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007885 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.007982 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.008009 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.008030 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109726 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109792 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109868 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109893 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109914 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109931 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109947 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.109979 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110029 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110064 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110159 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110229 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"swift-storage-1\" (UID: 
\"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110247 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110263 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110308 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110384 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110414 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110487 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.110831 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.111803 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.120289 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.121278 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod 
\"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.127249 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.134530 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.145937 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.152170 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.170260 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.187916 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212248 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212325 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212372 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212394 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212432 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212480 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.212684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.213172 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.213380 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 
15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.217624 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.217897 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.240508 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"swift-ring-rebalance-lfvg7\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.280305 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.625342 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:30 crc kubenswrapper[4763]: W0131 15:13:30.629841 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fd7b0a2_4f68_44bc_8720_1dcb2d975beb.slice/crio-90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2 WatchSource:0}: Error finding container 90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2: Status 404 returned error can't find the container with id 90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2 Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.698302 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2"} Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.710673 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:30 crc kubenswrapper[4763]: W0131 15:13:30.751943 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92920ef2_27a4_47ea_b8f0_220dc84853e4.slice/crio-bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29 WatchSource:0}: Error finding container bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29: Status 404 returned error can't find the container with id bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29 Jan 31 15:13:30 crc kubenswrapper[4763]: I0131 15:13:30.758279 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.060560 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30af61f1-3271-4c8a-9da4-44fd302b135b" path="/var/lib/kubelet/pods/30af61f1-3271-4c8a-9da4-44fd302b135b/volumes" Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708534 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708854 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708874 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.708883 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"2e0d604e834eeb3cec78cfb460add37121eda53b4eea073ea361954901be3261"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.710583 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerStarted","Data":"6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.710607 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerStarted","Data":"bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.713982 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.714116 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.714196 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.714262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150"} Jan 31 15:13:31 crc kubenswrapper[4763]: I0131 15:13:31.732034 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" 
podStartSLOduration=2.7320163109999998 podStartE2EDuration="2.732016311s" podCreationTimestamp="2026-01-31 15:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:31.724548244 +0000 UTC m=+1131.479286537" watchObservedRunningTime="2026-01-31 15:13:31.732016311 +0000 UTC m=+1131.486754614" Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.784056 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.784106 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.784114 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.806887 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.806932 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7"} Jan 31 15:13:32 crc kubenswrapper[4763]: I0131 15:13:32.806945 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843148 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843518 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843534 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843565 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 
15:13:33.843582 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.843594 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863203 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863299 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863341 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863354 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc"} Jan 31 15:13:33 crc kubenswrapper[4763]: I0131 15:13:33.863365 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.879795 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.879846 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerStarted","Data":"0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.889869 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.889927 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerStarted","Data":"a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da"} Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.934316 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=6.934295859 podStartE2EDuration="6.934295859s" podCreationTimestamp="2026-01-31 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:34.929203834 +0000 UTC m=+1134.683942137" watchObservedRunningTime="2026-01-31 15:13:34.934295859 +0000 UTC m=+1134.689034152" Jan 31 15:13:34 crc kubenswrapper[4763]: I0131 15:13:34.984996 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=6.984969588 podStartE2EDuration="6.984969588s" podCreationTimestamp="2026-01-31 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:13:34.978139888 +0000 UTC m=+1134.732878181" watchObservedRunningTime="2026-01-31 15:13:34.984969588 +0000 UTC m=+1134.739707901" Jan 31 15:13:39 crc kubenswrapper[4763]: I0131 15:13:39.940457 4763 generic.go:334] "Generic (PLEG): container finished" podID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerID="6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1" exitCode=0 Jan 31 15:13:39 crc kubenswrapper[4763]: I0131 15:13:39.940556 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerDied","Data":"6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1"} Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.241352 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.283876 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.283953 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284040 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284134 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284166 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") pod \"92920ef2-27a4-47ea-b8f0-220dc84853e4\" (UID: \"92920ef2-27a4-47ea-b8f0-220dc84853e4\") " Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284739 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.284918 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.289291 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855" (OuterVolumeSpecName: "kube-api-access-p4855") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "kube-api-access-p4855". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.305802 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.306533 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.320797 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts" (OuterVolumeSpecName: "scripts") pod "92920ef2-27a4-47ea-b8f0-220dc84853e4" (UID: "92920ef2-27a4-47ea-b8f0-220dc84853e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386302 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386371 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4855\" (UniqueName: \"kubernetes.io/projected/92920ef2-27a4-47ea-b8f0-220dc84853e4-kube-api-access-p4855\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386394 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92920ef2-27a4-47ea-b8f0-220dc84853e4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386412 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386646 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92920ef2-27a4-47ea-b8f0-220dc84853e4-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.386668 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92920ef2-27a4-47ea-b8f0-220dc84853e4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.962135 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" event={"ID":"92920ef2-27a4-47ea-b8f0-220dc84853e4","Type":"ContainerDied","Data":"bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29"} Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.962236 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad397da8bdce7d609b2654986821fb7c56faedd4c3912746b4a431f20c68d29" Jan 31 15:13:41 crc kubenswrapper[4763]: I0131 15:13:41.962265 4763 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-lfvg7" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.297960 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:42 crc kubenswrapper[4763]: E0131 15:13:42.298325 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerName="swift-ring-rebalance" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.298340 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerName="swift-ring-rebalance" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.298513 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" containerName="swift-ring-rebalance" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.299076 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.302090 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.304122 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.310864 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401232 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401278 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401310 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401342 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401371 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.401418 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502334 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502601 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502638 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502674 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502726 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.502774 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.503100 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.503320 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.503941 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.505731 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.506407 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.521294 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"swift-ring-rebalance-debug-kpdsv\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:42 crc kubenswrapper[4763]: I0131 15:13:42.618526 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.080548 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.980254 4763 generic.go:334] "Generic (PLEG): container finished" podID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerID="d277556494f9e3227af7edd989883de9cb25b67323d2c4aae46e1055b092bb37" exitCode=0 Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.980320 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" event={"ID":"8eed4c21-330f-4e87-ab2e-12aed0685331","Type":"ContainerDied","Data":"d277556494f9e3227af7edd989883de9cb25b67323d2c4aae46e1055b092bb37"} Jan 31 15:13:43 crc kubenswrapper[4763]: I0131 15:13:43.980514 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" event={"ID":"8eed4c21-330f-4e87-ab2e-12aed0685331","Type":"ContainerStarted","Data":"f053bd1214d0bded5c5fac6cbf980f79b07eb3facacc192c4df673d6ea33cecf"} Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.026541 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.039109 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv"] Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.178485 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.178591 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.178673 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.179778 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.179907 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e" gracePeriod=600 Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993211 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e" exitCode=0 
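
Annotation: the records above show the kubelet's HTTP liveness-probe handling for machine-config-daemon-9wp2x — the GET against http://127.0.0.1:8798/health fails with "connection refused", the kubelet marks the container unhealthy ("failed liveness probe, will be restarted"), and kills it with the pod's termination grace period (gracePeriod=600); the container then exits with code 0. Below is a minimal, hypothetical sketch of that probe loop in stdlib-only Python, not kubelet code: the endpoint and grace period are taken from the log lines above, while the period, timeout, and failure threshold are assumed to be the standard Kubernetes probe defaults (10s / 1s / 3), and kill_container stands in for the CRI kill that the log records.

import time
import urllib.request
import urllib.error

PROBE_URL = "http://127.0.0.1:8798/health"   # endpoint from the Liveness probe log line
PERIOD_S = 10          # assumed Kubernetes default periodSeconds
TIMEOUT_S = 1          # assumed Kubernetes default timeoutSeconds
FAILURE_THRESHOLD = 3  # assumed Kubernetes default failureThreshold

def probe_once(url: str, timeout: float) -> bool:
    """One HTTP GET; healthy iff the server answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # e.g. "connect: connection refused", as seen in the probe output above
        return False

def run_liveness_loop(kill_container) -> None:
    """Probe forever; after FAILURE_THRESHOLD consecutive failures,
    restart the container with the pod's termination grace period."""
    failures = 0
    while True:
        if probe_once(PROBE_URL, TIMEOUT_S):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                # kubelet: "Container ... failed liveness probe, will be restarted"
                kill_container(grace_period_s=600)  # gracePeriod=600 from the log
                failures = 0
        time.sleep(PERIOD_S)

The single "Probe failed" record above is presumably the failure that crossed the threshold; the kubelet logs each failed check, and only the kill and subsequent ContainerDied/ContainerStarted events mark the actual restart.
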
Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993298 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e"} Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993613 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"} Jan 31 15:13:44 crc kubenswrapper[4763]: I0131 15:13:44.993645 4763 scope.go:117] "RemoveContainer" containerID="b85e19e54b5edf80a14f0fece0b4788a8867e6919fb49643606ef0f2d6f8bd3e" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.323152 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453270 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453342 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b47dn\" (UniqueName: \"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453380 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453496 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453610 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.453659 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") pod \"8eed4c21-330f-4e87-ab2e-12aed0685331\" (UID: \"8eed4c21-330f-4e87-ab2e-12aed0685331\") " Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.455167 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.455970 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.456632 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:45 crc kubenswrapper[4763]: E0131 15:13:45.457041 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerName="swift-ring-rebalance" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.457054 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerName="swift-ring-rebalance" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.457225 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" containerName="swift-ring-rebalance" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.457642 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.460903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn" (OuterVolumeSpecName: "kube-api-access-b47dn") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "kube-api-access-b47dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.479677 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts" (OuterVolumeSpecName: "scripts") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.480506 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.483587 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.517838 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8eed4c21-330f-4e87-ab2e-12aed0685331" (UID: "8eed4c21-330f-4e87-ab2e-12aed0685331"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555635 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555711 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555776 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555809 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555863 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555929 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.555990 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556006 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556018 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8eed4c21-330f-4e87-ab2e-12aed0685331-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556029 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b47dn\" (UniqueName: 
\"kubernetes.io/projected/8eed4c21-330f-4e87-ab2e-12aed0685331-kube-api-access-b47dn\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556042 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8eed4c21-330f-4e87-ab2e-12aed0685331-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.556056 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8eed4c21-330f-4e87-ab2e-12aed0685331-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657398 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657493 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657567 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657642 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657729 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.657767 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.658144 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.658595 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.659059 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.664279 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.664406 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.678255 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"swift-ring-rebalance-debug-cvgd6\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:45 crc kubenswrapper[4763]: I0131 15:13:45.872519 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:46 crc kubenswrapper[4763]: I0131 15:13:46.003895 4763 scope.go:117] "RemoveContainer" containerID="d277556494f9e3227af7edd989883de9cb25b67323d2c4aae46e1055b092bb37" Jan 31 15:13:46 crc kubenswrapper[4763]: I0131 15:13:46.003973 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-kpdsv" Jan 31 15:13:46 crc kubenswrapper[4763]: I0131 15:13:46.330826 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:46 crc kubenswrapper[4763]: E0131 15:13:46.955343 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754727d0_e275_4404_8805_af884fac0750.slice/crio-conmon-e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.021676 4763 generic.go:334] "Generic (PLEG): container finished" podID="754727d0-e275-4404-8805-af884fac0750" containerID="e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c" exitCode=0 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.021751 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" event={"ID":"754727d0-e275-4404-8805-af884fac0750","Type":"ContainerDied","Data":"e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c"} Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.021799 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" event={"ID":"754727d0-e275-4404-8805-af884fac0750","Type":"ContainerStarted","Data":"20661d33d92f81fd7d6df3311e0f2533bfee4b0a999a9a78271c93e6e7e6e54a"} Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.063448 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eed4c21-330f-4e87-ab2e-12aed0685331" path="/var/lib/kubelet/pods/8eed4c21-330f-4e87-ab2e-12aed0685331/volumes" Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.079773 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.083616 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.189594 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190163 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server" containerID="cri-o://e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190202 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server" containerID="cri-o://c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190306 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater" containerID="cri-o://79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190359 4763 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor" containerID="cri-o://f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190425 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper" containerID="cri-o://bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190412 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor" containerID="cri-o://8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190472 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync" containerID="cri-o://c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190526 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater" containerID="cri-o://3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190425 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer" containerID="cri-o://758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190453 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron" containerID="cri-o://c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190554 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator" containerID="cri-o://26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190615 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor" containerID="cri-o://9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190631 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator" containerID="cri-o://79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190669 4763 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator" containerID="cri-o://f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.190677 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server" containerID="cri-o://e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.203254 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204034 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server" containerID="cri-o://afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204170 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron" containerID="cri-o://a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204226 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync" containerID="cri-o://2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204264 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer" containerID="cri-o://bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204301 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater" containerID="cri-o://11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204338 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor" containerID="cri-o://53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204375 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator" containerID="cri-o://44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204438 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server" 
containerID="cri-o://900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204479 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater" containerID="cri-o://b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204517 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor" containerID="cri-o://65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204553 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator" containerID="cri-o://62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204573 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor" containerID="cri-o://b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204594 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator" containerID="cri-o://1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204635 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server" containerID="cri-o://c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.204684 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper" containerID="cri-o://db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.219616 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220218 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server" containerID="cri-o://9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220602 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron" containerID="cri-o://0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 
15:13:47.220663 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync" containerID="cri-o://fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220734 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer" containerID="cri-o://97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220782 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater" containerID="cri-o://02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220823 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor" containerID="cri-o://82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220862 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator" containerID="cri-o://e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220902 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server" containerID="cri-o://772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220942 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater" containerID="cri-o://f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.220979 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor" containerID="cri-o://87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221020 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator" containerID="cri-o://46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221060 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server" containerID="cri-o://b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b" gracePeriod=30 Jan 31 15:13:47 crc 
kubenswrapper[4763]: I0131 15:13:47.221115 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper" containerID="cri-o://db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221168 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor" containerID="cri-o://95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.221219 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator" containerID="cri-o://43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.231026 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.236301 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-lfvg7"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.283512 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.284033 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd" containerID="cri-o://6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.284235 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server" containerID="cri-o://604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" gracePeriod=30 Jan 31 15:13:47 crc kubenswrapper[4763]: I0131 15:13:47.957252 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013294 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013372 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013400 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013460 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.013486 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") pod \"d952a9d1-9446-4003-b83c-9603f44fb634\" (UID: \"d952a9d1-9446-4003-b83c-9603f44fb634\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.014236 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.014306 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.022489 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2" (OuterVolumeSpecName: "kube-api-access-gfjq2") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "kube-api-access-gfjq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.025683 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049564 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049595 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049603 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049610 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049618 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049625 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049632 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049638 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049644 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049651 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049657 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049664 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049671 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049677 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" 
containerID="afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049729 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049753 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049766 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049777 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049789 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049802 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049812 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049825 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049835 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049846 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049858 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049866 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.049884 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051329 4763 generic.go:334] "Generic (PLEG): container finished" podID="d952a9d1-9446-4003-b83c-9603f44fb634" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051346 4763 generic.go:334] "Generic (PLEG): container finished" podID="d952a9d1-9446-4003-b83c-9603f44fb634" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051372 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerDied","Data":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051386 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerDied","Data":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051395 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" event={"ID":"d952a9d1-9446-4003-b83c-9603f44fb634","Type":"ContainerDied","Data":"c5b9e8169bbdced4c4b198da2aa6a001331670d45c8a28e822df8ec98cfb315a"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051409 4763 scope.go:117] "RemoveContainer" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051514 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.051744 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data" (OuterVolumeSpecName: "config-data") pod "d952a9d1-9446-4003-b83c-9603f44fb634" (UID: "d952a9d1-9446-4003-b83c-9603f44fb634"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057053 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057081 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057088 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057095 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057102 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057108 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057114 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057120 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057126 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057132 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057138 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057145 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057151 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057157 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" 
containerID="9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057201 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057228 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057241 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057251 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057273 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057281 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057289 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057300 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057308 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057315 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057323 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057331 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.057339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070492 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070532 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070539 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070548 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070555 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070564 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070570 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070578 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070584 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070590 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070597 4763 generic.go:334] "Generic (PLEG): container finished" 
podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070604 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070610 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070617 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd" exitCode=0 Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070580 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070662 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070675 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070685 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070717 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070727 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070739 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070747 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070756 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070764 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070772 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070802 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070811 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.070818 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd"} Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.079187 4763 scope.go:117] "RemoveContainer" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.102511 4763 scope.go:117] "RemoveContainer" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: E0131 15:13:48.103050 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": container with ID starting with 604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7 not found: ID does not exist" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103102 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} err="failed to get container status \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": rpc error: code = NotFound desc = could not find container \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": container with ID starting with 604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103129 4763 scope.go:117] "RemoveContainer" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: E0131 15:13:48.103658 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": container with 
ID starting with 6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271 not found: ID does not exist" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103688 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} err="failed to get container status \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": rpc error: code = NotFound desc = could not find container \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": container with ID starting with 6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.103721 4763 scope.go:117] "RemoveContainer" containerID="604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.104022 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7"} err="failed to get container status \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": rpc error: code = NotFound desc = could not find container \"604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7\": container with ID starting with 604fd98d9f005c656c4e220b48cc698ee2b749f79101d5b649810f13620df6c7 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.104060 4763 scope.go:117] "RemoveContainer" containerID="6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.104397 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271"} err="failed to get container status \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": rpc error: code = NotFound desc = could not find container \"6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271\": container with ID starting with 6563b545667e949b8a568e3e8a66b612c2410e644e5db0a715da2ee5664c5271 not found: ID does not exist" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115448 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjq2\" (UniqueName: \"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-kube-api-access-gfjq2\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115480 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d952a9d1-9446-4003-b83c-9603f44fb634-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115491 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115500 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d952a9d1-9446-4003-b83c-9603f44fb634-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.115507 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/d952a9d1-9446-4003-b83c-9603f44fb634-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.242758 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.317915 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.317973 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318001 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318054 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318140 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318158 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") pod \"754727d0-e275-4404-8805-af884fac0750\" (UID: \"754727d0-e275-4404-8805-af884fac0750\") " Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.318568 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.319032 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.323227 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26" (OuterVolumeSpecName: "kube-api-access-rfn26") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "kube-api-access-rfn26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.333650 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts" (OuterVolumeSpecName: "scripts") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.335918 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.336898 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "754727d0-e275-4404-8805-af884fac0750" (UID: "754727d0-e275-4404-8805-af884fac0750"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420031 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420064 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/754727d0-e275-4404-8805-af884fac0750-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420077 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420089 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/754727d0-e275-4404-8805-af884fac0750-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420100 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/754727d0-e275-4404-8805-af884fac0750-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.420111 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfn26\" (UniqueName: \"kubernetes.io/projected/754727d0-e275-4404-8805-af884fac0750-kube-api-access-rfn26\") on node \"crc\" DevicePath \"\"" Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.434906 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:48 crc kubenswrapper[4763]: I0131 15:13:48.444584 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-qp8rk"] Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.059963 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754727d0-e275-4404-8805-af884fac0750" path="/var/lib/kubelet/pods/754727d0-e275-4404-8805-af884fac0750/volumes" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.061049 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92920ef2-27a4-47ea-b8f0-220dc84853e4" path="/var/lib/kubelet/pods/92920ef2-27a4-47ea-b8f0-220dc84853e4/volumes" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.061758 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" path="/var/lib/kubelet/pods/d952a9d1-9446-4003-b83c-9603f44fb634/volumes" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.089754 4763 scope.go:117] "RemoveContainer" containerID="e321792cb928347235568231c246985837d0fc98a70e7f41e43b1fee17043a7c" Jan 31 15:13:49 crc kubenswrapper[4763]: I0131 15:13:49.089817 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-cvgd6" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.395282 4763 generic.go:334] "Generic (PLEG): container finished" podID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerID="c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186" exitCode=137 Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.395455 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186"} Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.405499 4763 generic.go:334] "Generic (PLEG): container finished" podID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerID="a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da" exitCode=137 Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.405562 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da"} Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.424164 4763 generic.go:334] "Generic (PLEG): container finished" podID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerID="0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6" exitCode=137 Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.424208 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6"} Jan 31 15:14:17 crc kubenswrapper[4763]: E0131 15:14:17.532909 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d83c163_ac13_4c3c_82cb_da30bdb664d4.slice/crio-conmon-0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fd7b0a2_4f68_44bc_8720_1dcb2d975beb.slice/crio-conmon-a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.602609 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.740509 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.745283 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.773981 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774045 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774095 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774166 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774187 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") pod \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\" (UID: \"0d83c163-ac13-4c3c-82cb-da30bdb664d4\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.774815 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock" (OuterVolumeSpecName: "lock") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.775146 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache" (OuterVolumeSpecName: "cache") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.779778 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.779783 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl" (OuterVolumeSpecName: "kube-api-access-5twbl") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "kube-api-access-5twbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.779848 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0d83c163-ac13-4c3c-82cb-da30bdb664d4" (UID: "0d83c163-ac13-4c3c-82cb-da30bdb664d4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876095 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876146 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876197 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876259 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876347 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876498 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876690 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876815 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") pod \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\" (UID: \"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876855 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.876920 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\" (UID: \"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf\") " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877050 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock" (OuterVolumeSpecName: "lock") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache" (OuterVolumeSpecName: "cache") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877203 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache" (OuterVolumeSpecName: "cache") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.877869 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock" (OuterVolumeSpecName: "lock") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878111 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878145 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878172 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0d83c163-ac13-4c3c-82cb-da30bdb664d4-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878198 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878225 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878274 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878302 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878326 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.878354 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5twbl\" (UniqueName: \"kubernetes.io/projected/0d83c163-ac13-4c3c-82cb-da30bdb664d4-kube-api-access-5twbl\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.880035 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "swift") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.880162 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx" (OuterVolumeSpecName: "kube-api-access-mjgnx") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "kube-api-access-mjgnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.881115 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.881739 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.882943 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q" (OuterVolumeSpecName: "kube-api-access-fzq5q") pod "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" (UID: "6c4f98e3-1507-44ae-9eb7-2247ab0e37bf"). InnerVolumeSpecName "kube-api-access-fzq5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.883052 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" (UID: "0fd7b0a2-4f68-44bc-8720-1dcb2d975beb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.904276 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979659 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979735 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979812 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979840 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979859 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjgnx\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-kube-api-access-mjgnx\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979876 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzq5q\" (UniqueName: \"kubernetes.io/projected/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf-kube-api-access-fzq5q\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.979891 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:17 crc kubenswrapper[4763]: I0131 15:14:17.995001 4763 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.007472 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.082560 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.082616 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.446804 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"0d83c163-ac13-4c3c-82cb-da30bdb664d4","Type":"ContainerDied","Data":"2e0d604e834eeb3cec78cfb460add37121eda53b4eea073ea361954901be3261"}
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.447294 4763 scope.go:117] "RemoveContainer" containerID="0faa735c202a6e16bf9b9e8948e5792bcc5154cd3f56ada820b906ae813092c6"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.446926 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.463339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"6c4f98e3-1507-44ae-9eb7-2247ab0e37bf","Type":"ContainerDied","Data":"fc755e76f946cbf270f23b7d66505db67f9f56fbc2a1281c6fec2037e341ce02"}
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.463493 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.491077 4763 scope.go:117] "RemoveContainer" containerID="fc85ed3fbad7523183d7b9b76e6d1c6a4d902df91667936a1b1b3eb39ced033a"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.511621 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.515109 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"0fd7b0a2-4f68-44bc-8720-1dcb2d975beb","Type":"ContainerDied","Data":"90e93209198d21ecb365141d71e0805661beff6af7f209b3c4513e9211db45f2"}
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.515246 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.532749 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.546649 4763 scope.go:117] "RemoveContainer" containerID="97469e71426efac22b111ee22106730f06552f0b2b6a901f3e3e8bc6c68198dc"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.549214 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.560561 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.572091 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.576043 4763 scope.go:117] "RemoveContainer" containerID="02a531233c8eff48d67d9eac0b9ddc4fec244753f6b3bfb9b21057bdb7be4c7c"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.582669 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.592851 4763 scope.go:117] "RemoveContainer" containerID="82d538c669bae5f7f95077ed345669166fe3742501df486afd6361232d21fcc5"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.611160 4763 scope.go:117] "RemoveContainer" containerID="e4b43e4f4b97a5362dd61572955b58929d58cd939580340917dd6856eee48c51"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.625663 4763 scope.go:117] "RemoveContainer" containerID="772b6f037f17f533f5cff02df3680cef890ca71af7aadc9080514f6990cfd4dc"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.641561 4763 scope.go:117] "RemoveContainer" containerID="f4ab4db4c5d9dcb6ccd57b84ea36065134a3bd7c3fff7268ce34ac3b6e1ba0ee"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.654853 4763 scope.go:117] "RemoveContainer" containerID="87fd27d178e3710cca6b6fe084307cf387592c330affd1b08797d73b27da0729"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.672720 4763 scope.go:117] "RemoveContainer" containerID="46dfcdcd0be805e9a45249b9185005e90a2633a0552782f7c25619de9a9e01d7"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.686510 4763 scope.go:117] "RemoveContainer" containerID="b145444ea56aafdcef5964b190965b77165b7812022b77df5b6a7fb787ea1f6b"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.700719 4763 scope.go:117] "RemoveContainer" containerID="db9e135b4f60c10dc082344ade61ed089817f4f5bb60cf75e034bd8fb673be95"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.726439 4763 scope.go:117] "RemoveContainer" containerID="95874f605a14427c61a1fe2f12f189c9c62e0310731ed8b97d4aa745deb8470a"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.754904 4763 scope.go:117] "RemoveContainer" containerID="43761f858930e3361c93e26c546d75f9049fbeb0aed2f837e6aa6e57845c2002"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.828877 4763 scope.go:117] "RemoveContainer" containerID="9ec12f6ac4fb2fe1a0eaa6ae49e4dc13f98f460950b7abbd35ef0d14386f1613"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.851852 4763 scope.go:117] "RemoveContainer" containerID="c92a67d175e7b2e0d2f4e6d03791a25078677c370b55cc124a64ac4580b1a186"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.867411 4763 scope.go:117] "RemoveContainer" containerID="c27b98be7532909d83c273cb34dc9fd9b883ddc36da5b9f8e235750c5110926b"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.882067 4763 scope.go:117] "RemoveContainer" containerID="758ecdf114748a274e883c4dfb0b65ebf29c160068e2fb6ba6a6ba341b7ad8cc"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.895958 4763 scope.go:117] "RemoveContainer" containerID="3e3a3d2095bc51e457a624c288843b1229f6edd0c9886caeecc8ae8c8fc0b366"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.909651 4763 scope.go:117] "RemoveContainer" containerID="9953487bd5a77e9cae7330ff25f691cf98544a779a3aaffa22289f618b2ab84e"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.925525 4763 scope.go:117] "RemoveContainer" containerID="f8537cd557c893811f4771b5d344fb7198348c938522cc5297a57e159c08bb1d"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.939129 4763 scope.go:117] "RemoveContainer" containerID="c90149cab3e82c3b3aa0479a8bfbfea30e029a20dbf49c2af33d575a0e5ad411"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.960780 4763 scope.go:117] "RemoveContainer" containerID="79013075c8cee2c124b6f705985ce81a282712a5c1945c3a1ae8d6d583285d31"
Jan 31 15:14:18 crc kubenswrapper[4763]: I0131 15:14:18.982312 4763 scope.go:117] "RemoveContainer" containerID="f9c131e99fc59700893d4fbe2f7af6bf69bbf3f78f285398911d18a20c8ec6ea"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.002079 4763 scope.go:117] "RemoveContainer" containerID="79c3d3910a7c0f333be18e5912411834c0bc3275953b7a51d75d52ac387ddcaa"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.017886 4763 scope.go:117] "RemoveContainer" containerID="e6a2d028a5029018d31ffce2a967e9d0ddb51a1a19145baba94de8eb996b72f0"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.031295 4763 scope.go:117] "RemoveContainer" containerID="bc321d01362d270731a7e972d44081e349a9ee7b3efcaf30f8babf36901ecf1e"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.046490 4763 scope.go:117] "RemoveContainer" containerID="8c3af34ffb6d8a6328c95ad20cd01fd80df1d4aa3cf85710910e19ed4a069343"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.051427 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" path="/var/lib/kubelet/pods/0d83c163-ac13-4c3c-82cb-da30bdb664d4/volumes"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.053237 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" path="/var/lib/kubelet/pods/0fd7b0a2-4f68-44bc-8720-1dcb2d975beb/volumes"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.055338 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" path="/var/lib/kubelet/pods/6c4f98e3-1507-44ae-9eb7-2247ab0e37bf/volumes"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.062615 4763 scope.go:117] "RemoveContainer" containerID="26d18a07f1ba356bb7df9ab96b968aba9c5d64937ec37a59fae70520b88066d5"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.086863 4763 scope.go:117] "RemoveContainer" containerID="e8f41f27d9646e9b7f047f60ad0964f6a5ebd721b98bf742430a0e9d49f019dd"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.114424 4763 scope.go:117] "RemoveContainer" containerID="a0512f3a95ac09e16cf86ec9e8bfe2bd51334b12e32f11edcc9924d32d0317da"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.134365 4763 scope.go:117] "RemoveContainer" containerID="2801ebc5b5111a3f354bc751c7f4df5a35e9e0f9f8c1b8f97dd7437b72bd0333"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.155088 4763 scope.go:117] "RemoveContainer" containerID="bcd8336fa8422ff8f4710866669ead2747397c3d7e3479845eca695012773964"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.179361 4763 scope.go:117] "RemoveContainer" containerID="11ca22dc3aea18b7002ee209c33ea918f1a6a74de6a87b3e0282c7b043144034"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.194896 4763 scope.go:117] "RemoveContainer" containerID="53cc7d03295052ed760abf6e7bf461dbb5c915e085d4733f09f4b88dee41fce6"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.227535 4763 scope.go:117] "RemoveContainer" containerID="44221e09a6221477f521497096958975eb2d81c909e9df7abcc70e7a6affe606"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.244570 4763 scope.go:117] "RemoveContainer" containerID="900954e8a49f5e385b29971b0076eb09818fee16df406d5bda2ffa32153b8f2c"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.258197 4763 scope.go:117] "RemoveContainer" containerID="b347ad616d945afabba996ef7b64761509d97dec2de5dbce22818ebd309bb2fd"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.281082 4763 scope.go:117] "RemoveContainer" containerID="65af4eadceb3880bfd8c0ae7c91be9262b951343a51ea6f4b387d7805c18a203"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.295563 4763 scope.go:117] "RemoveContainer" containerID="62e05d6e34f51a24fb36847b0e659c0ded9040b7a9df03f170ac2888f520a2f7"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.308029 4763 scope.go:117] "RemoveContainer" containerID="c1724ae1efdd76da3683f41ca56bb7dec66988978a2f0f763be0b355c56b24d4"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.323005 4763 scope.go:117] "RemoveContainer" containerID="db2e802c7fb1380e7d58552715a2bbc320cf91d8405eb9d9f94c80688bdaa86e"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.338522 4763 scope.go:117] "RemoveContainer" containerID="b66057b270aafdfd00402f417700789d4eb2a09f6b95a3210ba44757a3173fb1"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.352116 4763 scope.go:117] "RemoveContainer" containerID="1162fdbabfecb0f41f85a7d666a0cc25c2eb72256d57c4d4b10c1e0aa3ad58c6"
Jan 31 15:14:19 crc kubenswrapper[4763]: I0131 15:14:19.367295 4763 scope.go:117] "RemoveContainer" containerID="afa6a3f7b8649103325ad5315e9dc80fef7ee37028bfc085ed5215505185d150"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.468672 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469503 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469524 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469542 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469555 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469578 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469593 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469610 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469622 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469641 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469652 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469674 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469686 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469746 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469761 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469779 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469792 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469809 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469821 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469842 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469854 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469870 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469882 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469902 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469914 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469928 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469940 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469958 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.469970 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.469992 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470003 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470024 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470037 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470055 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470068 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470086 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470098 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470119 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470131 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470144 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470155 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470174 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470189 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470204 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470216 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470232 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754727d0-e275-4404-8805-af884fac0750" containerName="swift-ring-rebalance"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470244 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="754727d0-e275-4404-8805-af884fac0750" containerName="swift-ring-rebalance"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470265 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470277 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470296 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470308 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470320 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470332 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470353 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470364 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470388 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470400 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470415 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470428 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470447 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470458 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470479 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470490 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470507 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470518 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470532 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470544 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470564 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470577 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470593 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470604 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470623 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470635 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470653 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470665 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470688 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470729 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470749 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470765 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470791 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470807 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470831 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470847 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470871 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470888 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470912 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.470924 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.470938 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471332 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471358 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471370 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471388 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471400 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471419 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471431 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.471452 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471464 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471737 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471756 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471771 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471790 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471808 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471820 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471842 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471856 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471873 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471889 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471904 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471915 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471934 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471949 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471964 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.471987 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472012 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472038 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-httpd"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472063 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472083 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472099 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472118 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472134 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d952a9d1-9446-4003-b83c-9603f44fb634" containerName="proxy-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472150 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472193 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472211 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472228 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472245 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472260 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472276 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472293 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472307 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472324 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472336 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-updater"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472352 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="rsync"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472370 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="swift-recon-cron"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472383 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="object-expirer"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472400 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472411 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="account-reaper"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472426 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472440 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="container-replicator"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472454 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472467 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd7b0a2-4f68-44bc-8720-1dcb2d975beb" containerName="container-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472482 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472497 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472511 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="754727d0-e275-4404-8805-af884fac0750" containerName="swift-ring-rebalance"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472525 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d83c163-ac13-4c3c-82cb-da30bdb664d4" containerName="container-server"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.472540 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4f98e3-1507-44ae-9eb7-2247ab0e37bf" containerName="object-auditor"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.480443 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.483052 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-nv564"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.486772 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.486980 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.488675 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.503155 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634497 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634528 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634558 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634622 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.634653 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736663 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736765 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736859 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736889 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.736924 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.737306 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.737394 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.737441 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: E0131 15:14:21.737560 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:22.237540842 +0000 UTC m=+1181.992279225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.737681 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.737842 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.759935 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:21 crc kubenswrapper[4763]: I0131 15:14:21.768741 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:22 crc kubenswrapper[4763]: I0131 15:14:22.244028 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:22 crc kubenswrapper[4763]: E0131 15:14:22.244263 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:22 crc kubenswrapper[4763]: E0131 15:14:22.244298 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:22 crc kubenswrapper[4763]: E0131 15:14:22.244395 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:23.244357124 +0000 UTC m=+1182.999095447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:23 crc kubenswrapper[4763]: I0131 15:14:23.258870 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:23 crc kubenswrapper[4763]: E0131 15:14:23.259195 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:23 crc kubenswrapper[4763]: E0131 15:14:23.259240 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:23 crc kubenswrapper[4763]: E0131 15:14:23.259346 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:25.259312643 +0000 UTC m=+1185.014050986 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.290768 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:25 crc kubenswrapper[4763]: E0131 15:14:25.291062 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: E0131 15:14:25.291447 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: E0131 15:14:25.291543 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:29.29151125 +0000 UTC m=+1189.046249583 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.354986 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"]
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.355999 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.357939 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.358347 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.358719 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.378256 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"]
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.493781 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.493832 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494043 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494131 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494184 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.494244 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.595865 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596266 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596446 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596725 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596858 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.596872 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.597170 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.597369 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.597391 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.601516 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.606285 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.621759 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"swift-ring-rebalance-6fg48\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.685078 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:25 crc kubenswrapper[4763]: I0131 15:14:25.914170 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"]
Jan 31 15:14:26 crc kubenswrapper[4763]: I0131 15:14:26.621816 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerStarted","Data":"203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993"}
Jan 31 15:14:26 crc kubenswrapper[4763]: I0131 15:14:26.621883 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerStarted","Data":"49906396a5fab4b0e67ab645b9ec1480523b83bf3b1aa1767c5464699ae64df6"}
Jan 31 15:14:26 crc kubenswrapper[4763]: I0131 15:14:26.663401 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" podStartSLOduration=1.66336668 podStartE2EDuration="1.66336668s" podCreationTimestamp="2026-01-31 15:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:26.651376105 +0000 UTC m=+1186.406114398" watchObservedRunningTime="2026-01-31 15:14:26.66336668 +0000 UTC m=+1186.418105013"
Jan 31 15:14:29 crc kubenswrapper[4763]: I0131 15:14:29.362183 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:29 crc kubenswrapper[4763]: E0131 15:14:29.362435 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:14:29 crc kubenswrapper[4763]: E0131 15:14:29.362480 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:14:29 crc kubenswrapper[4763]: E0131 15:14:29.362576 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift podName:0f637cde-45f1-4c1e-b345-7f89e17eccc6 nodeName:}" failed. No retries permitted until 2026-01-31 15:14:37.362541369 +0000 UTC m=+1197.117279692 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift") pod "swift-storage-0" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6") : configmap "swift-ring-files" not found
Jan 31 15:14:32 crc kubenswrapper[4763]: I0131 15:14:32.671174 4763 generic.go:334] "Generic (PLEG): container finished" podID="52d85cde-9969-46d1-9e16-44c5747493cc" containerID="203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993" exitCode=0
Jan 31 15:14:32 crc kubenswrapper[4763]: I0131 15:14:32.671238 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerDied","Data":"203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993"}
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.010053 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48"
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133379 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133465 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133606 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133651 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133691 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.133804 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") pod \"52d85cde-9969-46d1-9e16-44c5747493cc\" (UID: \"52d85cde-9969-46d1-9e16-44c5747493cc\") "
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.134295 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.134643 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.140272 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf" (OuterVolumeSpecName: "kube-api-access-gqjcf") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "kube-api-access-gqjcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.144101 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.158196 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.163734 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts" (OuterVolumeSpecName: "scripts") pod "52d85cde-9969-46d1-9e16-44c5747493cc" (UID: "52d85cde-9969-46d1-9e16-44c5747493cc"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.235725 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52d85cde-9969-46d1-9e16-44c5747493cc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236005 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236014 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52d85cde-9969-46d1-9e16-44c5747493cc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236024 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236033 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52d85cde-9969-46d1-9e16-44c5747493cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.236044 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqjcf\" (UniqueName: \"kubernetes.io/projected/52d85cde-9969-46d1-9e16-44c5747493cc-kube-api-access-gqjcf\") on node \"crc\" DevicePath \"\"" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.690563 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" event={"ID":"52d85cde-9969-46d1-9e16-44c5747493cc","Type":"ContainerDied","Data":"49906396a5fab4b0e67ab645b9ec1480523b83bf3b1aa1767c5464699ae64df6"} Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.690890 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49906396a5fab4b0e67ab645b9ec1480523b83bf3b1aa1767c5464699ae64df6" Jan 31 15:14:34 crc kubenswrapper[4763]: I0131 15:14:34.690642 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-6fg48" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.383782 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.405010 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"swift-storage-0\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.412161 4763 util.go:30] "No sandbox for pod can be found. 
Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.412161 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:14:37 crc kubenswrapper[4763]: I0131 15:14:37.901647 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725008 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc"}
Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725296 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1"}
Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725311 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d"}
Jan 31 15:14:38 crc kubenswrapper[4763]: I0131 15:14:38.725320 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"6e2e9e90507c6c3f151001d225cf3ffd2d74fbb47248c3805eb15d884a5abba9"}
Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740377 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392"}
Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740780 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b"}
Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740796 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b"}
Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740805 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234"}
Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740813 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682"}
Jan 31 15:14:39 crc kubenswrapper[4763]: I0131 15:14:39.740823 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.753277 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754250 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754361 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754467 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754543 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754646 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0"}
Jan 31 15:14:40 crc kubenswrapper[4763]: I0131 15:14:40.754727 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerStarted","Data":"16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f"}
Jan 31 15:14:41 crc kubenswrapper[4763]: I0131 15:14:41.801674 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.80165402 podStartE2EDuration="21.80165402s" podCreationTimestamp="2026-01-31 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:41.796405633 +0000 UTC m=+1201.551143926" watchObservedRunningTime="2026-01-31 15:14:41.80165402 +0000 UTC m=+1201.556392313"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.862920 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"]
Jan 31 15:14:46 crc kubenswrapper[4763]: E0131 15:14:46.863962 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" containerName="swift-ring-rebalance"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.863988 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" containerName="swift-ring-rebalance"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.864208 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" containerName="swift-ring-rebalance"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.867291 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.874176 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"]
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.907547 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.944955 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945067 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945125 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945156 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:46 crc kubenswrapper[4763]: I0131 15:14:46.945217 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046720 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046802 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046873 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046916 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.046963 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.047684 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.047939 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.054680 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.055768 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.068951 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"swift-proxy-59c4d74667-fdrd6\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.228107 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.695918 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"]
Jan 31 15:14:47 crc kubenswrapper[4763]: I0131 15:14:47.813664 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerStarted","Data":"b119bcbbd6a3db6b88bff647faeec6bfa2e07015bb14d46d3cb45d7d5768e7de"}
Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.820502 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerStarted","Data":"7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6"}
Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.820752 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerStarted","Data":"c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850"}
Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.821559 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.821580 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:48 crc kubenswrapper[4763]: I0131 15:14:48.844382 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" podStartSLOduration=2.844367173 podStartE2EDuration="2.844367173s" podCreationTimestamp="2026-01-31 15:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:48.837141623 +0000 UTC m=+1208.591879916" watchObservedRunningTime="2026-01-31 15:14:48.844367173 +0000 UTC m=+1208.599105466"
Jan 31 15:14:57 crc kubenswrapper[4763]: I0131 15:14:57.230819 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:57 crc kubenswrapper[4763]: I0131 15:14:57.231328 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"
Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.876505 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"]
Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.878789 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.882316 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.883658 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:14:58 crc kubenswrapper[4763]: I0131 15:14:58.884762 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"]
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030534 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030605 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030631 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030667 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030685 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.030731 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132057 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
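The mount sequence starting here for swift-ring-rebalance-debug-zkqdq repeats the volume set already seen on swift-ring-rebalance-6fg48: two ConfigMap volumes (scripts, ring-data-devices), two Secret volumes (swiftconf, dispersionconf), an emptyDir (etc-swift), and the auto-injected service-account token (kube-api-access-*). Reconstructed as a pod-spec fragment, with the caveat that the backing ConfigMap names are inferred from the "Caches populated" reflector lines above and the Secret names are not visible in this log:

    # Sketch only: volume set implied by the VerifyControllerAttachedVolume /
    # MountVolume entries. The kube-api-access-* projected volume is injected
    # by Kubernetes itself and is not declared in the pod spec.
    volumes:
      - name: scripts
        configMap:
          name: swift-ring-scripts             # inferred from reflector line
      - name: ring-data-devices
        configMap:
          name: swift-ring-config-data         # inferred from reflector line
      - name: swiftconf
        secret:
          secretName: swift-conf               # assumed; not shown in the log
      - name: dispersionconf
        secret:
          secretName: swift-dispersion-config  # assumed; not shown in the log
      - name: etc-swift
        emptyDir: {}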
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132121 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132145 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132177 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132194 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.132226 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.133129 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.133372 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.133748 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.137618 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.137658 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.147464 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"swift-ring-rebalance-debug-zkqdq\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.210443 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.615559 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"]
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.911595 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" event={"ID":"9fa1aa7a-aa74-4771-a74a-51c73cd37867","Type":"ContainerStarted","Data":"dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2"}
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.911644 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" event={"ID":"9fa1aa7a-aa74-4771-a74a-51c73cd37867","Type":"ContainerStarted","Data":"873824f34424ae78088f76f80ab4ed2704a83af62bd0bddc461bf62e64ee83f7"}
Jan 31 15:14:59 crc kubenswrapper[4763]: I0131 15:14:59.936626 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" podStartSLOduration=1.936593841 podStartE2EDuration="1.936593841s" podCreationTimestamp="2026-01-31 15:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:14:59.93655674 +0000 UTC m=+1219.691295053" watchObservedRunningTime="2026-01-31 15:14:59.936593841 +0000 UTC m=+1219.691332174"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.144773 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"]
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.145829 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.147813 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.147948 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.153786 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"]
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.250105 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.250218 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.250313 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.351731 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.351835 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.352008 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.352870 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.362192 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.372215 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"collect-profiles-29497875-4wkrh\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.518822 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.749949 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"]
Jan 31 15:15:00 crc kubenswrapper[4763]: W0131 15:15:00.750201 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b6cd4d_3b2e_456c_afec_739df2a5e910.slice/crio-183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3 WatchSource:0}: Error finding container 183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3: Status 404 returned error can't find the container with id 183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.919885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerStarted","Data":"2b0b52b31674e8c05c098868af13d05d526cd753e9a261dde704ad7aa24d9b2d"}
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.920448 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerStarted","Data":"183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3"}
Jan 31 15:15:00 crc kubenswrapper[4763]: I0131 15:15:00.942962 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" podStartSLOduration=0.942938803 podStartE2EDuration="942.938803ms" podCreationTimestamp="2026-01-31 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:00.941866975 +0000 UTC m=+1220.696605288" watchObservedRunningTime="2026-01-31 15:15:00.942938803 +0000 UTC m=+1220.697677096"
Jan 31 15:15:01 crc kubenswrapper[4763]: I0131 15:15:01.934288 4763 generic.go:334] "Generic (PLEG): container finished" podID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerID="2b0b52b31674e8c05c098868af13d05d526cd753e9a261dde704ad7aa24d9b2d" exitCode=0
Jan 31 15:15:01 crc kubenswrapper[4763]: I0131 15:15:01.934339 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerDied","Data":"2b0b52b31674e8c05c098868af13d05d526cd753e9a261dde704ad7aa24d9b2d"}
Jan 31 15:15:02 crc kubenswrapper[4763]: I0131 15:15:02.947584 4763 generic.go:334] "Generic (PLEG): container finished" podID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerID="dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2" exitCode=0
Jan 31 15:15:02 crc kubenswrapper[4763]: I0131 15:15:02.947689 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" event={"ID":"9fa1aa7a-aa74-4771-a74a-51c73cd37867","Type":"ContainerDied","Data":"dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2"}
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.271909 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.394935 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") pod \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") "
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395023 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") pod \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") "
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395271 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") pod \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\" (UID: \"e7b6cd4d-3b2e-456c-afec-739df2a5e910\") "
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395653 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7b6cd4d-3b2e-456c-afec-739df2a5e910" (UID: "e7b6cd4d-3b2e-456c-afec-739df2a5e910"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.395861 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7b6cd4d-3b2e-456c-afec-739df2a5e910-config-volume\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.401544 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh" (OuterVolumeSpecName: "kube-api-access-59lfh") pod "e7b6cd4d-3b2e-456c-afec-739df2a5e910" (UID: "e7b6cd4d-3b2e-456c-afec-739df2a5e910"). InnerVolumeSpecName "kube-api-access-59lfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.401959 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e7b6cd4d-3b2e-456c-afec-739df2a5e910" (UID: "e7b6cd4d-3b2e-456c-afec-739df2a5e910"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.496571 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7b6cd4d-3b2e-456c-afec-739df2a5e910-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.496610 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59lfh\" (UniqueName: \"kubernetes.io/projected/e7b6cd4d-3b2e-456c-afec-739df2a5e910-kube-api-access-59lfh\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.961599 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh"
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.961616 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-4wkrh" event={"ID":"e7b6cd4d-3b2e-456c-afec-739df2a5e910","Type":"ContainerDied","Data":"183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3"}
Jan 31 15:15:03 crc kubenswrapper[4763]: I0131 15:15:03.961669 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183b14b3f0f251597c5eee8bd052ad374fea7ca89988627b65c901cc3a3efbb3"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.347377 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.396896 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"]
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.407311 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq"]
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515013 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") "
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515393 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") "
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515448 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") "
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515511 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") "
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515595 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") "
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.515654 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") pod \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\" (UID: \"9fa1aa7a-aa74-4771-a74a-51c73cd37867\") "
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.517003 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.517320 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.520859 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b" (OuterVolumeSpecName: "kube-api-access-dvg4b") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "kube-api-access-dvg4b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.542257 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.542809 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.548262 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts" (OuterVolumeSpecName: "scripts") pod "9fa1aa7a-aa74-4771-a74a-51c73cd37867" (UID: "9fa1aa7a-aa74-4771-a74a-51c73cd37867"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.589898 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"]
Jan 31 15:15:04 crc kubenswrapper[4763]: E0131 15:15:04.590225 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerName="collect-profiles"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590239 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerName="collect-profiles"
Jan 31 15:15:04 crc kubenswrapper[4763]: E0131 15:15:04.590254 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerName="swift-ring-rebalance"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590262 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerName="swift-ring-rebalance"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590402 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b6cd4d-3b2e-456c-afec-739df2a5e910" containerName="collect-profiles"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590422 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" containerName="swift-ring-rebalance"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.590953 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.608330 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"]
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618372 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9fa1aa7a-aa74-4771-a74a-51c73cd37867-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618414 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618436 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvg4b\" (UniqueName: \"kubernetes.io/projected/9fa1aa7a-aa74-4771-a74a-51c73cd37867-kube-api-access-dvg4b\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618457 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618478 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9fa1aa7a-aa74-4771-a74a-51c73cd37867-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.618496 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9fa1aa7a-aa74-4771-a74a-51c73cd37867-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719645 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719763 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719836 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.719872 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.720018 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.720041 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822172 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822346 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822412 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822538 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822603 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.822935 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.823652 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.823929 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.830648 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.830800 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.838566 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"swift-ring-rebalance-debug-4x4rg\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.924567 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.971245 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="873824f34424ae78088f76f80ab4ed2704a83af62bd0bddc461bf62e64ee83f7"
Jan 31 15:15:04 crc kubenswrapper[4763]: I0131 15:15:04.971424 4763 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-zkqdq" Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.062554 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa1aa7a-aa74-4771-a74a-51c73cd37867" path="/var/lib/kubelet/pods/9fa1aa7a-aa74-4771-a74a-51c73cd37867/volumes" Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.161156 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:05 crc kubenswrapper[4763]: W0131 15:15:05.165989 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4953b18b_b3bb_490c_8992_eef7307fdd9d.slice/crio-27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d WatchSource:0}: Error finding container 27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d: Status 404 returned error can't find the container with id 27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.982197 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" event={"ID":"4953b18b-b3bb-490c-8992-eef7307fdd9d","Type":"ContainerStarted","Data":"8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3"} Jan 31 15:15:05 crc kubenswrapper[4763]: I0131 15:15:05.982488 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" event={"ID":"4953b18b-b3bb-490c-8992-eef7307fdd9d","Type":"ContainerStarted","Data":"27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d"} Jan 31 15:15:06 crc kubenswrapper[4763]: I0131 15:15:06.995217 4763 generic.go:334] "Generic (PLEG): container finished" podID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerID="8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3" exitCode=0 Jan 31 15:15:06 crc kubenswrapper[4763]: I0131 15:15:06.995330 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" event={"ID":"4953b18b-b3bb-490c-8992-eef7307fdd9d","Type":"ContainerDied","Data":"8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3"} Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.293680 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.321215 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.326097 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg"] Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.475733 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476042 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476072 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476159 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476179 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.476248 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") pod \"4953b18b-b3bb-490c-8992-eef7307fdd9d\" (UID: \"4953b18b-b3bb-490c-8992-eef7307fdd9d\") " Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.477934 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.478277 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.484828 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl" (OuterVolumeSpecName: "kube-api-access-nqxgl") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "kube-api-access-nqxgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.496103 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.498142 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.500050 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts" (OuterVolumeSpecName: "scripts") pod "4953b18b-b3bb-490c-8992-eef7307fdd9d" (UID: "4953b18b-b3bb-490c-8992-eef7307fdd9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.577919 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578207 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4953b18b-b3bb-490c-8992-eef7307fdd9d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578284 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4953b18b-b3bb-490c-8992-eef7307fdd9d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578342 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578397 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4953b18b-b3bb-490c-8992-eef7307fdd9d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:08 crc kubenswrapper[4763]: I0131 15:15:08.578459 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqxgl\" (UniqueName: \"kubernetes.io/projected/4953b18b-b3bb-490c-8992-eef7307fdd9d-kube-api-access-nqxgl\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:09 crc kubenswrapper[4763]: I0131 15:15:09.019729 4763 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="27ad12e9d73808eeb265a782edb8eb6db4ef1efbb117fc542c3862e2d28c7f3d" Jan 31 15:15:09 crc kubenswrapper[4763]: I0131 15:15:09.019835 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-4x4rg" Jan 31 15:15:09 crc kubenswrapper[4763]: I0131 15:15:09.053963 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" path="/var/lib/kubelet/pods/4953b18b-b3bb-490c-8992-eef7307fdd9d/volumes" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.967586 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:10 crc kubenswrapper[4763]: E0131 15:15:10.968283 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerName="swift-ring-rebalance" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.968326 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerName="swift-ring-rebalance" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.968521 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="4953b18b-b3bb-490c-8992-eef7307fdd9d" containerName="swift-ring-rebalance" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.969269 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.972193 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.972749 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:15:10 crc kubenswrapper[4763]: I0131 15:15:10.983858 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118052 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118100 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118123 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118167 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118194 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.118493 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.219881 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.219992 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220148 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220267 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220321 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.220409 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.221541 4763 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.221770 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.222245 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.228121 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.229042 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.253749 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"swift-ring-rebalance-debug-vlp8r\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.301417 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:11 crc kubenswrapper[4763]: I0131 15:15:11.813023 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:11 crc kubenswrapper[4763]: W0131 15:15:11.820675 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e462269_9089_4bbd_a58c_1b0667972de9.slice/crio-e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a WatchSource:0}: Error finding container e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a: Status 404 returned error can't find the container with id e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a Jan 31 15:15:12 crc kubenswrapper[4763]: I0131 15:15:12.050832 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" event={"ID":"9e462269-9089-4bbd-a58c-1b0667972de9","Type":"ContainerStarted","Data":"e69934d7b524a70aa6f64ea6649babfe469e02cb0e6edef1ab0d95dadff6436a"} Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.071925 4763 generic.go:334] "Generic (PLEG): container finished" podID="9e462269-9089-4bbd-a58c-1b0667972de9" containerID="df8f491bf5aacbab5a8630f25038d21050a63a1c4fbccf55da4b9be6e45da402" exitCode=0 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.071999 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" event={"ID":"9e462269-9089-4bbd-a58c-1b0667972de9","Type":"ContainerDied","Data":"df8f491bf5aacbab5a8630f25038d21050a63a1c4fbccf55da4b9be6e45da402"} Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.124019 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.132291 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237300 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237783 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" containerID="cri-o://16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237830 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" containerID="cri-o://eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237778 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" containerID="cri-o://9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237912 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" 
containerID="cri-o://ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237942 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" containerID="cri-o://38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.237968 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" containerID="cri-o://7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238006 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" containerID="cri-o://0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238048 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" containerID="cri-o://485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238100 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" containerID="cri-o://42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238138 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" containerID="cri-o://ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238227 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" containerID="cri-o://9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238258 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" containerID="cri-o://5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238247 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" containerID="cri-o://66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238309 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" 
podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" containerID="cri-o://f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238292 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" containerID="cri-o://370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.238375 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" containerID="cri-o://6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.265681 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.282401 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-6fg48"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.294040 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.294286 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" containerID="cri-o://c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850" gracePeriod=30 Jan 31 15:15:13 crc kubenswrapper[4763]: I0131 15:15:13.294594 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" containerID="cri-o://7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6" gracePeriod=30 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.081619 4763 generic.go:334] "Generic (PLEG): container finished" podID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerID="7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.082811 4763 generic.go:334] "Generic (PLEG): container finished" podID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerID="c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.081689 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerDied","Data":"7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.082947 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerDied","Data":"c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091856 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e" exitCode=0 Jan 31 15:15:14 
crc kubenswrapper[4763]: I0131 15:15:14.091897 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091913 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091929 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091957 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091974 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091987 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092000 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092016 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.091919 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092088 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092105 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092120 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092028 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234" exitCode=0 Jan 31 15:15:14 crc 
kubenswrapper[4763]: I0131 15:15:14.092176 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092205 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092136 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092268 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092296 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092317 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092221 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092362 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092381 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d" exitCode=0 Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092336 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092447 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092467 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092482 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092494 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092506 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.092518 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d"} Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.527685 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683175 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683536 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683584 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683627 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683649 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.683678 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") pod \"9e462269-9089-4bbd-a58c-1b0667972de9\" (UID: \"9e462269-9089-4bbd-a58c-1b0667972de9\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.684342 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.684382 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.691057 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc" (OuterVolumeSpecName: "kube-api-access-smxfc") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "kube-api-access-smxfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.703443 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.703665 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.715875 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts" (OuterVolumeSpecName: "scripts") pod "9e462269-9089-4bbd-a58c-1b0667972de9" (UID: "9e462269-9089-4bbd-a58c-1b0667972de9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.731382 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785783 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785815 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smxfc\" (UniqueName: \"kubernetes.io/projected/9e462269-9089-4bbd-a58c-1b0667972de9-kube-api-access-smxfc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785826 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9e462269-9089-4bbd-a58c-1b0667972de9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785834 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785843 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9e462269-9089-4bbd-a58c-1b0667972de9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.785850 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9e462269-9089-4bbd-a58c-1b0667972de9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886311 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886387 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886419 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886438 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886458 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") pod \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\" (UID: \"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0\") " Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.886759 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.887343 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.890309 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.891033 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf" (OuterVolumeSpecName: "kube-api-access-7wzbf") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "kube-api-access-7wzbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.916430 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data" (OuterVolumeSpecName: "config-data") pod "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" (UID: "193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988521 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988555 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988567 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988579 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:14 crc kubenswrapper[4763]: I0131 15:15:14.988591 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wzbf\" (UniqueName: \"kubernetes.io/projected/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0-kube-api-access-7wzbf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.049576 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d85cde-9969-46d1-9e16-44c5747493cc" path="/var/lib/kubelet/pods/52d85cde-9969-46d1-9e16-44c5747493cc/volumes" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.050354 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" path="/var/lib/kubelet/pods/9e462269-9089-4bbd-a58c-1b0667972de9/volumes" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.102583 4763 scope.go:117] "RemoveContainer" containerID="df8f491bf5aacbab5a8630f25038d21050a63a1c4fbccf55da4b9be6e45da402" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.102585 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-vlp8r" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.106006 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" event={"ID":"193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0","Type":"ContainerDied","Data":"b119bcbbd6a3db6b88bff647faeec6bfa2e07015bb14d46d3cb45d7d5768e7de"} Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.106084 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.137665 4763 scope.go:117] "RemoveContainer" containerID="7408e446c82f19d8bca458d55cad69dd2743a440768ce18ecfc4933fcd0795d6" Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.137867 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.144469 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-59c4d74667-fdrd6"] Jan 31 15:15:15 crc kubenswrapper[4763]: I0131 15:15:15.155752 4763 scope.go:117] "RemoveContainer" containerID="c75bfd6add657441b6a71901eb789ee4a6824e5f58e98d0ae1677f3d7f9ec850" Jan 31 15:15:17 crc kubenswrapper[4763]: I0131 15:15:17.056211 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" path="/var/lib/kubelet/pods/193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0/volumes" Jan 31 15:15:41 crc kubenswrapper[4763]: I0131 15:15:41.683100 4763 scope.go:117] "RemoveContainer" containerID="23e3fe9f7b231da52853b276574f18bf79feb38b42567138be71d5b85f526157" Jan 31 15:15:41 crc kubenswrapper[4763]: I0131 15:15:41.709430 4763 scope.go:117] "RemoveContainer" containerID="0c5a179d917112c47df3d672325ac30e6e4efd61885f9377b2ea3e10d6c629b4" Jan 31 15:15:41 crc kubenswrapper[4763]: I0131 15:15:41.741813 4763 scope.go:117] "RemoveContainer" containerID="58e640168ef1b75e853e394649bc966d1036d4bb11ab8918c809f9ee7dee4196" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.433346 4763 generic.go:334] "Generic (PLEG): container finished" podID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerID="5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e" exitCode=137 Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.433461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e"} Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.670473 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758680 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758768 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758852 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758883 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.758998 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") pod \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\" (UID: \"0f637cde-45f1-4c1e-b345-7f89e17eccc6\") " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.761919 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock" (OuterVolumeSpecName: "lock") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.761964 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache" (OuterVolumeSpecName: "cache") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.767518 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2" (OuterVolumeSpecName: "kube-api-access-pd4v2") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "kube-api-access-pd4v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.767795 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.767898 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "0f637cde-45f1-4c1e-b345-7f89e17eccc6" (UID: "0f637cde-45f1-4c1e-b345-7f89e17eccc6"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.860990 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd4v2\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-kube-api-access-pd4v2\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861041 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861051 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861060 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0f637cde-45f1-4c1e-b345-7f89e17eccc6-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.861069 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0f637cde-45f1-4c1e-b345-7f89e17eccc6-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.872636 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:15:43 crc kubenswrapper[4763]: I0131 15:15:43.962606 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.177785 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.177845 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.445778 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0f637cde-45f1-4c1e-b345-7f89e17eccc6","Type":"ContainerDied","Data":"6e2e9e90507c6c3f151001d225cf3ffd2d74fbb47248c3805eb15d884a5abba9"} Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.445839 4763 scope.go:117] "RemoveContainer" containerID="9c5b86851732a69e6e48096dd6f1b888f12fe142cae2d0bd8bef364d2e611b8e" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.445862 4763 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.462094 4763 scope.go:117] "RemoveContainer" containerID="5af94547d6eae1fcceeb9f2af0d269373db899e6b1f77d785fa71bb533c4ff1e" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.483490 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.484346 4763 scope.go:117] "RemoveContainer" containerID="370769a37b25c50b9d2a6455c870de2ac9df87a2e7618e148b0d0f1c2ea87ea4" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.491418 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.505539 4763 scope.go:117] "RemoveContainer" containerID="66b1c2a9de2f47ccee33ed4084db9a99570169d12537c16e6734017f17b75f44" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.530883 4763 scope.go:117] "RemoveContainer" containerID="6eded90931167b47c7a89da68469090108a9e35bce81085ea08e8704905bbdf6" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.553643 4763 scope.go:117] "RemoveContainer" containerID="f11355f4c90e474a9eb7e8bffad1d97535b6d05264b819d2663c38b16a03b8b0" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.574100 4763 scope.go:117] "RemoveContainer" containerID="16d25b731fa680bf545bedd9ee8124c58230c5826aad06eaed21d44ebcb95b3f" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.593194 4763 scope.go:117] "RemoveContainer" containerID="eb52d0ca91a944d5c8ad2fa653c9fa6ee50c07649415a73ade8c5c4153e97392" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.613199 4763 scope.go:117] "RemoveContainer" containerID="ddb0e5002dc17cb138568e744262f32bbfd952fee6858962182e6dd4037f223b" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.629583 4763 scope.go:117] "RemoveContainer" containerID="38e47bd1962160574137a5d84c3fcba7de4d7e7d941eaa6fc85fc7d17c701c2b" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.647382 4763 scope.go:117] "RemoveContainer" containerID="7d80e2b71f4cf2dfe389248e41642348209b0bf63a38f66e95b3a2c6e7260234" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.665292 4763 scope.go:117] "RemoveContainer" containerID="0ed0e66180b91482a31099e92316c33c11bb10524bf629042109c378e3080682" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.684112 4763 scope.go:117] "RemoveContainer" containerID="485cbd77da69f3c2a508872f5ca6e889f79adf0023dc9705725e5bb72e9b68f0" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.702564 4763 scope.go:117] "RemoveContainer" containerID="42ec21618efaa5ddb06f0dc914d466c534df09064f84bd8dfd7d4e397b2993dc" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.716850 4763 scope.go:117] "RemoveContainer" containerID="ad6aed25c6e7195d02585e59f08a712946e5140839e590d00e1fd3b656f906e1" Jan 31 15:15:44 crc kubenswrapper[4763]: I0131 15:15:44.729904 4763 scope.go:117] "RemoveContainer" containerID="9deafbdec521654222135fae4a75189b7d212a38e26a9e35528481bb48dfe25d" Jan 31 15:15:45 crc kubenswrapper[4763]: I0131 15:15:45.051832 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" path="/var/lib/kubelet/pods/0f637cde-45f1-4c1e-b345-7f89e17eccc6/volumes" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627513 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:46 crc kubenswrapper[4763]: 
E0131 15:15:46.627817 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627832 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627845 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627852 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627865 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627875 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627888 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627895 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627906 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627913 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627925 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627933 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627943 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627950 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627961 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627969 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.627981 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.627988 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" Jan 31 15:15:46 crc 
kubenswrapper[4763]: E0131 15:15:46.628002 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628010 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628020 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628028 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628039 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628046 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628057 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628066 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628084 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628092 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628104 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628112 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628132 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628141 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628149 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" containerName="swift-ring-rebalance" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628158 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" containerName="swift-ring-rebalance" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628171 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628178 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" 
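
The cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 burst above is the kubelet's RemoveStaleState pass: before admitting the replacement swift-storage-0 pod it drops the per-container CPUSet assignments and memory-manager state left behind by the deleted pod UIDs (0f637cde-…, 193f8f06-…, 9e462269-…). A capture like this is easier to audit with a short script; the sketch below is a minimal illustration, not part of the capture, and assumes the journal was exported to a plain-text file with one entry per line (the path kubelet.log is hypothetical). It summarizes the stale-state removals together with the MountVolume retry backoffs that appear further down.

#!/usr/bin/env python3
"""Minimal sketch: summarize kubenswrapper journal entries like the ones above.

Assumption (not part of the log itself): the journal was saved as plain text,
one entry per line, at the hypothetical path kubelet.log.
"""
import re
from collections import defaultdict

# Matches the klog-style key="value" pairs used by kubenswrapper entries,
# e.g. podUID="0f637cde-..." containerName="object-server".
KV = re.compile(r'(\w+)="([^"]*)"')

stale = defaultdict(set)      # podUID -> containers whose CPU/memory state was removed
backoffs = defaultdict(list)  # volume name -> observed durationBeforeRetry values

with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical export path
    for line in fh:
        fields = dict(KV.findall(line))
        # cpu_manager/state_mem/memory_manager "RemoveStaleState" entries
        if "RemoveStaleState" in line and "podUID" in fields:
            stale[fields["podUID"]].add(fields.get("containerName", "?"))
        # nestedpendingoperations retry entries, e.g.
        # "(durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" ..."
        m = re.search(r'failed.*durationBeforeRetry (\S+)\).*volume "([^"]+)"', line)
        if m:
            backoffs[m.group(2)].append(m.group(1))

for uid, names in sorted(stale.items()):
    print(f"{uid}: stale state removed for {len(names)} containers")
for vol, waits in sorted(backoffs.items()):
    # The doubling sequence (500ms, 1s, 2s, ...) is kubelet's exponential mount backoff.
    print(f"{vol}: retries backed off {' -> '.join(waits)}")

Run against this capture, it should report the deleted storage pod's containers (object-server, account-replicator, and the rest) under podUID 0f637cde-…, and the 500ms -> 1s -> 2s doubling on the etc-swift mounts once the swift-ring-files ConfigMap goes missing, as the entries below show.
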
Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.628190 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628198 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628357 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628370 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628383 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628394 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e462269-9089-4bbd-a58c-1b0667972de9" containerName="swift-ring-rebalance" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628404 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628416 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628424 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628433 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-auditor" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628441 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628451 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="rsync" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628463 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-httpd" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628474 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="container-sharder" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628483 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="swift-recon-cron" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628493 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="193f8f06-4347-44d2-bbeb-6c8a9ff6d0a0" containerName="proxy-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628504 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-reaper" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628516 4763 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="account-server" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628527 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-replicator" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628537 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-expirer" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.628548 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f637cde-45f1-4c1e-b345-7f89e17eccc6" containerName="object-updater" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.633001 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.634751 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.634955 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.635466 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-29kdk" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.636109 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.654103 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.659202 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.671976 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.689749 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.697100 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.700803 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.720788 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801365 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801412 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801429 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801455 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801479 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801493 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801514 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801533 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801548 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801565 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801581 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801599 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801821 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801875 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.801938 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903546 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903599 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903620 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " 
pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903644 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903668 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903686 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903719 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903736 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903753 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903766 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903798 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903828 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903848 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903860 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.403833198 +0000 UTC m=+1267.158571581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903776 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.903900 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.40387838 +0000 UTC m=+1267.158616673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903940 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903934 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") device mount path \"/mnt/openstack/pv09\"" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.903965 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904054 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904098 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904135 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904159 4763 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904292 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904391 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904657 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.905128 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.904747 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904794 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904849 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.905166 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.904831 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: E0131 15:15:46.905252 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.405230935 +0000 UTC m=+1267.159969248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.924485 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.924556 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.926489 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.927478 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.929431 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:46 crc kubenswrapper[4763]: I0131 15:15:46.935878 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.170049 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.171447 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.174110 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.180465 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309662 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309782 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309846 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309918 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.309942 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411320 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411454 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411488 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: 
\"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411546 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411606 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411658 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411749 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.411804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411492 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411848 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411888 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:47.911871873 +0000 UTC m=+1267.666610156 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411550 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412045 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412074 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.412064328 +0000 UTC m=+1268.166802711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.411832 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412091 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412114 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.412106519 +0000 UTC m=+1268.166844932 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412178 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412187 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.412209 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.412200751 +0000 UTC m=+1268.166939144 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.412526 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.412600 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.416146 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.433928 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: I0131 15:15:47.917788 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.918067 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.918110 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:47 crc kubenswrapper[4763]: E0131 15:15:47.918210 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:48.918182472 +0000 UTC m=+1268.672920805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.426322 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426588 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426619 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.426627 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.426661 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426830 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426844 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426830 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.426667528 +0000 UTC m=+1270.181405861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426887 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.426875293 +0000 UTC m=+1270.181613596 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426946 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.426987 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.427063 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.427036497 +0000 UTC m=+1270.181774790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: I0131 15:15:48.933823 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.934118 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.934624 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:48 crc kubenswrapper[4763]: E0131 15:15:48.934771 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:50.934731613 +0000 UTC m=+1270.689469946 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.403637 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.405985 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.409136 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.410202 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.424330 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459108 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459180 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459219 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459261 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459301 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459325 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459386 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459448 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459470 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.459618 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.460873 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.459660 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461129 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.459734 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461209 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461251 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.461232918 +0000 UTC m=+1274.215971211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.459802 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461323 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461435 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.461399252 +0000 UTC m=+1274.216137585 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.461633 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.461616807 +0000 UTC m=+1274.216355210 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.483622 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.484335 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-6r9bn ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" podUID="11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.491174 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.496186 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.503423 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561261 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561482 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561556 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561617 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561676 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561778 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.561877 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562231 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562424 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"swift-ring-rebalance-5mn9x\" (UID: 
\"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562653 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562771 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562852 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562922 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.562982 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.563346 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.568034 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.568892 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"swift-ring-rebalance-5mn9x\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.587276 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"swift-ring-rebalance-5mn9x\" (UID: 
\"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664355 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664411 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664446 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9bn\" (UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664465 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664661 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664713 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") pod \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\" (UID: \"11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51\") " Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664890 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.664960 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jxfn\" (UniqueName: 
\"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665139 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665013 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts" (OuterVolumeSpecName: "scripts") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665493 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665730 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665796 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665813 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.665794 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.666050 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.666535 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 
15:15:50.666946 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.669483 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.669644 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn" (OuterVolumeSpecName: "kube-api-access-6r9bn") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "kube-api-access-6r9bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.669963 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" (UID: "11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.670178 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.670205 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.686750 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"swift-ring-rebalance-cw985\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767361 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767581 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767714 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r9bn\" 
(UniqueName: \"kubernetes.io/projected/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-kube-api-access-6r9bn\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.767804 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.777904 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:15:50 crc kubenswrapper[4763]: I0131 15:15:50.970345 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.970606 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.970753 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:50 crc kubenswrapper[4763]: E0131 15:15:50.970838 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:15:54.970812592 +0000 UTC m=+1274.725550985 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.190839 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.505241 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-5mn9x" Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.505247 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerStarted","Data":"0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215"} Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.505857 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerStarted","Data":"e919616136f82fa408c8b1e00ee331214a152822ef67ade82653a25aa1eb565a"} Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.525115 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" podStartSLOduration=1.525084262 podStartE2EDuration="1.525084262s" podCreationTimestamp="2026-01-31 15:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:51.523251023 +0000 UTC m=+1271.277989356" watchObservedRunningTime="2026-01-31 15:15:51.525084262 +0000 UTC m=+1271.279822595" Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.563958 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:51 crc kubenswrapper[4763]: I0131 15:15:51.563999 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-5mn9x"] Jan 31 15:15:53 crc kubenswrapper[4763]: I0131 15:15:53.049949 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51" path="/var/lib/kubelet/pods/11f74d9f-37dc-49f7-9b0e-33bb6b1d8c51/volumes" Jan 31 15:15:54 crc kubenswrapper[4763]: I0131 15:15:54.552636 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.552974 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553028 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: I0131 15:15:54.553050 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553122 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift podName:0a5bfb32-7eae-4b04-9aee-d0873f0c93b9 nodeName:}" failed. No retries permitted until 2026-01-31 15:16:02.553088285 +0000 UTC m=+1282.307826618 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift") pod "swift-storage-0" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9") : configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: I0131 15:15:54.553180 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553234 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553258 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553324 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift podName:8d1baecf-2b7e-418b-8c64-95b6551f365e nodeName:}" failed. No retries permitted until 2026-01-31 15:16:02.553301271 +0000 UTC m=+1282.308039604 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift") pod "swift-storage-1" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e") : configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553544 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553566 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 31 15:15:54 crc kubenswrapper[4763]: E0131 15:15:54.553612 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift podName:e6ace0ac-a7c8-4413-90ee-53d6bf699eef nodeName:}" failed. No retries permitted until 2026-01-31 15:16:02.553596749 +0000 UTC m=+1282.308335082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift") pod "swift-storage-2" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef") : configmap "swift-ring-files" not found Jan 31 15:15:55 crc kubenswrapper[4763]: I0131 15:15:55.060623 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:15:55 crc kubenswrapper[4763]: E0131 15:15:55.060760 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:15:55 crc kubenswrapper[4763]: E0131 15:15:55.060773 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj: configmap "swift-ring-files" not found Jan 31 15:15:55 crc kubenswrapper[4763]: E0131 15:15:55.060810 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift podName:a76b9c98-a93e-4935-947c-9ecf237b7a97 nodeName:}" failed. No retries permitted until 2026-01-31 15:16:03.06079686 +0000 UTC m=+1282.815535153 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift") pod "swift-proxy-7d8cf99555-brqpj" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97") : configmap "swift-ring-files" not found Jan 31 15:16:01 crc kubenswrapper[4763]: I0131 15:16:01.586894 4763 generic.go:334] "Generic (PLEG): container finished" podID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerID="0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215" exitCode=0 Jan 31 15:16:01 crc kubenswrapper[4763]: I0131 15:16:01.587012 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerDied","Data":"0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215"} Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.587459 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.587533 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.587566 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.594837 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"swift-storage-1\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.594972 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"swift-storage-0\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.595655 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"swift-storage-2\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.627307 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.855392 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.873126 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.974117 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996336 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996423 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996598 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996655 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.996672 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:02 crc kubenswrapper[4763]: I0131 15:16:02.998459 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.002863 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn" (OuterVolumeSpecName: "kube-api-access-7jxfn") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "kube-api-access-7jxfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.021435 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts" (OuterVolumeSpecName: "scripts") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.037703 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.039554 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.088977 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") pod \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\" (UID: \"5a85fc7e-226b-498d-9156-c4a5ecf075b9\") " Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097595 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097814 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097831 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jxfn\" (UniqueName: \"kubernetes.io/projected/5a85fc7e-226b-498d-9156-c4a5ecf075b9-kube-api-access-7jxfn\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097846 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5a85fc7e-226b-498d-9156-c4a5ecf075b9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097862 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097871 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5a85fc7e-226b-498d-9156-c4a5ecf075b9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.097966 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5a85fc7e-226b-498d-9156-c4a5ecf075b9" (UID: "5a85fc7e-226b-498d-9156-c4a5ecf075b9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.104065 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"swift-proxy-7d8cf99555-brqpj\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.198817 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5a85fc7e-226b-498d-9156-c4a5ecf075b9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.288276 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:16:03 crc kubenswrapper[4763]: W0131 15:16:03.305922 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5bfb32_7eae_4b04_9aee_d0873f0c93b9.slice/crio-17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824 WatchSource:0}: Error finding container 17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824: Status 404 returned error can't find the container with id 17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824 Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.361453 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.394186 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.621260 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.621502 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"1d4cf1913d51894066c90a63ebfe91dd9186021fe6b288d04eb4138560d222cd"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640527 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640587 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640605 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.640617 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"54e9716be29a1a3e98b1c62af30cb48d8d18ed8bc33c1e831e29d90d1bbee6be"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.642311 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" event={"ID":"5a85fc7e-226b-498d-9156-c4a5ecf075b9","Type":"ContainerDied","Data":"e919616136f82fa408c8b1e00ee331214a152822ef67ade82653a25aa1eb565a"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.642342 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e919616136f82fa408c8b1e00ee331214a152822ef67ade82653a25aa1eb565a" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.642409 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-cw985" Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.655987 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.656041 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824"} Jan 31 15:16:03 crc kubenswrapper[4763]: I0131 15:16:03.880764 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.669999 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerStarted","Data":"feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670240 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670252 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerStarted","Data":"2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670261 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerStarted","Data":"d3926134dfc43380c3c616215cd0f1de55c1eb370aa433868d96185fd6990644"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.670275 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673236 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673271 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673279 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.673288 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687351 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687403 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687417 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.687430 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.693374 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" podStartSLOduration=17.693350349 podStartE2EDuration="17.693350349s" podCreationTimestamp="2026-01-31 15:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:04.690296459 +0000 UTC m=+1284.445034762" watchObservedRunningTime="2026-01-31 15:16:04.693350349 +0000 UTC m=+1284.448088642" Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727802 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727838 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563"} Jan 31 15:16:04 crc 
kubenswrapper[4763]: I0131 15:16:04.727847 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727857 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a"} Jan 31 15:16:04 crc kubenswrapper[4763]: I0131 15:16:04.727866 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758078 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758371 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758381 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758390 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.758398 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771041 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771103 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771112 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771121 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.771130 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.789939 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790134 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790190 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:16:05 crc kubenswrapper[4763]: I0131 15:16:05.790319 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809133 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809559 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809581 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.809599 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerStarted","Data":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822653 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822737 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822780 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822793 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.822804 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerStarted","Data":"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.831266 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.831330 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerStarted","Data":"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6"} Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.863232 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.863209645 podStartE2EDuration="21.863209645s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:06.856130369 +0000 UTC m=+1286.610868692" watchObservedRunningTime="2026-01-31 15:16:06.863209645 +0000 UTC m=+1286.617947938" Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.958367 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=21.958345025 podStartE2EDuration="21.958345025s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:06.957191286 +0000 UTC m=+1286.711929599" watchObservedRunningTime="2026-01-31 15:16:06.958345025 +0000 UTC m=+1286.713083318" Jan 31 15:16:06 crc kubenswrapper[4763]: I0131 15:16:06.961523 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=21.961509069 podStartE2EDuration="21.961509069s" podCreationTimestamp="2026-01-31 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-31 15:16:06.916268269 +0000 UTC m=+1286.671006572" watchObservedRunningTime="2026-01-31 15:16:06.961509069 +0000 UTC m=+1286.716247362" Jan 31 15:16:13 crc kubenswrapper[4763]: I0131 15:16:13.397034 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:13 crc kubenswrapper[4763]: I0131 15:16:13.398566 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.177639 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.177783 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.552284 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:14 crc kubenswrapper[4763]: E0131 15:16:14.552794 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerName="swift-ring-rebalance" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.552810 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerName="swift-ring-rebalance" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.553019 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" containerName="swift-ring-rebalance" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.553626 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.556745 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.557463 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.568544 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.692651 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.692778 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.692867 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.693000 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.693055 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.693114 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" 
Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794880 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794906 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.794932 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.795025 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.795045 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.795999 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.797286 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.797333 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.803201 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 
15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.805390 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"
Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.817950 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"swift-ring-rebalance-debug-gg6w8\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"
Jan 31 15:16:14 crc kubenswrapper[4763]: I0131 15:16:14.877282 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"
Jan 31 15:16:15 crc kubenswrapper[4763]: I0131 15:16:15.349333 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"]
Jan 31 15:16:15 crc kubenswrapper[4763]: W0131 15:16:15.351989 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f1fdbe2_6748_4c51_ac3f_dab0229c44cc.slice/crio-67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896 WatchSource:0}: Error finding container 67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896: Status 404 returned error can't find the container with id 67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896
Jan 31 15:16:15 crc kubenswrapper[4763]: I0131 15:16:15.924337 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" event={"ID":"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc","Type":"ContainerStarted","Data":"aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b"}
Jan 31 15:16:15 crc kubenswrapper[4763]: I0131 15:16:15.924414 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" event={"ID":"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc","Type":"ContainerStarted","Data":"67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896"}
Jan 31 15:16:17 crc kubenswrapper[4763]: I0131 15:16:17.942980 4763 generic.go:334] "Generic (PLEG): container finished" podID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerID="aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b" exitCode=0
Jan 31 15:16:17 crc kubenswrapper[4763]: I0131 15:16:17.943036 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" event={"ID":"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc","Type":"ContainerDied","Data":"aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b"}
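The cadvisor warning above (manager.go:1169, "Status 404 ... can't find the container") looks like the familiar startup race between the cgroup watch firing and the new sandbox becoming queryable; the same sandbox 67d0dcf7... is reported ContainerStarted about half a second later, so it appears transient here. Timing-wise, container aa5b58c7... starts at 15:16:15.924 and finishes with exitCode=0 at 15:16:17.943, so the rebalance run took roughly two seconds. A sketch that recovers such per-container run times by pairing PLEG events; it assumes one journal entry per line and treats kubelet.log as a placeholder for a saved excerpt like this one:

```python
import json
import re

# The event={...} payload in "SyncLoop (PLEG)" entries is plain JSON in the journal.
evt_re = re.compile(r'"SyncLoop \(PLEG\): event for pod".*?event=(\{.*\})')
started = {}
with open("kubelet.log") as fh:                      # placeholder file name
    for line in fh:
        m = evt_re.search(line)
        if not m:
            continue
        event = json.loads(m.group(1))
        cid = event["Data"]                          # container or sandbox ID
        if event["Type"] == "ContainerStarted":
            started[cid] = line[:15]                 # syslog prefix, e.g. "Jan 31 15:16:15"
        elif event["Type"] == "ContainerDied" and cid in started:
            print(f"{cid[:12]} ran from {started.pop(cid)} to {line[:15]}")
```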
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.349804 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.356474 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8"] Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.463608 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.463785 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.463910 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.464003 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.464070 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.464105 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") pod \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\" (UID: \"5f1fdbe2-6748-4c51-ac3f-dab0229c44cc\") " Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.465586 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.466863 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.469188 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w" (OuterVolumeSpecName: "kube-api-access-hzq9w") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "kube-api-access-hzq9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.486501 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:19 crc kubenswrapper[4763]: E0131 15:16:19.486862 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerName="swift-ring-rebalance" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.486883 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerName="swift-ring-rebalance" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.487075 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" containerName="swift-ring-rebalance" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.489985 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.495026 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.517041 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.520364 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.530291 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts" (OuterVolumeSpecName: "scripts") pod "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" (UID: "5f1fdbe2-6748-4c51-ac3f-dab0229c44cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566467 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566509 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566567 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566670 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.566849 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567049 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzq9w\" (UniqueName: \"kubernetes.io/projected/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-kube-api-access-hzq9w\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567071 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567085 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567096 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567108 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.567119 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668532 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668619 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668720 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668744 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668775 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.668804 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.669427 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.669562 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.670472 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.672242 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.672324 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.692450 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"swift-ring-rebalance-debug-r7mcn\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.892397 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.965189 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67d0dcf7e61bf52cb933ac95c51f0d8509d1d5f271dc71ff2cd0b95c5b935896" Jan 31 15:16:19 crc kubenswrapper[4763]: I0131 15:16:19.965234 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-gg6w8" Jan 31 15:16:20 crc kubenswrapper[4763]: I0131 15:16:20.334390 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:20 crc kubenswrapper[4763]: I0131 15:16:20.977385 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" event={"ID":"150fee52-9a9b-47cf-aeaf-1699d0cbe077","Type":"ContainerStarted","Data":"c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd"} Jan 31 15:16:20 crc kubenswrapper[4763]: I0131 15:16:20.977731 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" event={"ID":"150fee52-9a9b-47cf-aeaf-1699d0cbe077","Type":"ContainerStarted","Data":"d5916ddcab4c821a804d1bbf18735ca780e3635e1775c3acaf28cc394d8be895"} Jan 31 15:16:21 crc kubenswrapper[4763]: I0131 15:16:21.013057 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" podStartSLOduration=2.013016418 podStartE2EDuration="2.013016418s" podCreationTimestamp="2026-01-31 15:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:21.00030358 +0000 UTC m=+1300.755041903" watchObservedRunningTime="2026-01-31 15:16:21.013016418 +0000 UTC m=+1300.767754721" Jan 31 15:16:21 crc kubenswrapper[4763]: I0131 15:16:21.056307 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1fdbe2-6748-4c51-ac3f-dab0229c44cc" path="/var/lib/kubelet/pods/5f1fdbe2-6748-4c51-ac3f-dab0229c44cc/volumes" Jan 31 15:16:23 crc kubenswrapper[4763]: I0131 15:16:23.000364 4763 generic.go:334] "Generic (PLEG): container finished" podID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerID="c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd" exitCode=0 Jan 31 15:16:23 crc kubenswrapper[4763]: I0131 15:16:23.000497 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" event={"ID":"150fee52-9a9b-47cf-aeaf-1699d0cbe077","Type":"ContainerDied","Data":"c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd"} Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.406462 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.443104 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.452439 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn"] Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550290 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550437 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550476 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550659 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550793 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.550851 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") pod \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\" (UID: \"150fee52-9a9b-47cf-aeaf-1699d0cbe077\") " Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.551155 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.551375 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.552038 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.557749 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp" (OuterVolumeSpecName: "kube-api-access-j8gkp") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "kube-api-access-j8gkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.572903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts" (OuterVolumeSpecName: "scripts") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.576185 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.579818 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "150fee52-9a9b-47cf-aeaf-1699d0cbe077" (UID: "150fee52-9a9b-47cf-aeaf-1699d0cbe077"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654383 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/150fee52-9a9b-47cf-aeaf-1699d0cbe077-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654421 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654436 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gkp\" (UniqueName: \"kubernetes.io/projected/150fee52-9a9b-47cf-aeaf-1699d0cbe077-kube-api-access-j8gkp\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654447 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/150fee52-9a9b-47cf-aeaf-1699d0cbe077-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.654458 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/150fee52-9a9b-47cf-aeaf-1699d0cbe077-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.886439 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:24 crc kubenswrapper[4763]: E0131 15:16:24.887095 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerName="swift-ring-rebalance" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.887120 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerName="swift-ring-rebalance" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.887287 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" containerName="swift-ring-rebalance" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.887987 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.900482 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957490 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957584 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957618 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957640 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957664 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:24 crc kubenswrapper[4763]: I0131 15:16:24.957803 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.020991 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5916ddcab4c821a804d1bbf18735ca780e3635e1775c3acaf28cc394d8be895" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.021034 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-r7mcn" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.058913 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059001 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059074 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059147 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059278 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.059359 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.060293 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.060827 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.060947 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod 
\"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.061868 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150fee52-9a9b-47cf-aeaf-1699d0cbe077" path="/var/lib/kubelet/pods/150fee52-9a9b-47cf-aeaf-1699d0cbe077/volumes" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.065812 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.077810 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.083055 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"swift-ring-rebalance-debug-dzfgh\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.205556 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:25 crc kubenswrapper[4763]: I0131 15:16:25.724250 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:26 crc kubenswrapper[4763]: I0131 15:16:26.034380 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" event={"ID":"a9c426eb-2a62-47e9-aa4a-53a30fe02b82","Type":"ContainerStarted","Data":"a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e"} Jan 31 15:16:26 crc kubenswrapper[4763]: I0131 15:16:26.034448 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" event={"ID":"a9c426eb-2a62-47e9-aa4a-53a30fe02b82","Type":"ContainerStarted","Data":"eee7e3263b2dee12dc6fe7a42457ffbf6509bd6d4cff970117aec89f30e09e69"} Jan 31 15:16:26 crc kubenswrapper[4763]: I0131 15:16:26.072849 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" podStartSLOduration=2.072822614 podStartE2EDuration="2.072822614s" podCreationTimestamp="2026-01-31 15:16:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:26.063133406 +0000 UTC m=+1305.817871759" watchObservedRunningTime="2026-01-31 15:16:26.072822614 +0000 UTC m=+1305.827560947" Jan 31 15:16:28 crc kubenswrapper[4763]: I0131 15:16:28.049665 4763 generic.go:334] "Generic (PLEG): container finished" podID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerID="a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e" exitCode=0 Jan 31 15:16:28 crc kubenswrapper[4763]: I0131 15:16:28.049740 4763 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" event={"ID":"a9c426eb-2a62-47e9-aa4a-53a30fe02b82","Type":"ContainerDied","Data":"a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e"} Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.330574 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.363029 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.372639 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh"] Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437133 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437179 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437236 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437314 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437341 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437383 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") pod \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\" (UID: \"a9c426eb-2a62-47e9-aa4a-53a30fe02b82\") " Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.437773 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.438157 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.438380 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.438399 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.442874 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv" (OuterVolumeSpecName: "kube-api-access-4vflv") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "kube-api-access-4vflv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.461589 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.475788 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts" (OuterVolumeSpecName: "scripts") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.485372 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a9c426eb-2a62-47e9-aa4a-53a30fe02b82" (UID: "a9c426eb-2a62-47e9-aa4a-53a30fe02b82"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540087 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540134 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vflv\" (UniqueName: \"kubernetes.io/projected/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-kube-api-access-4vflv\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540150 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:29 crc kubenswrapper[4763]: I0131 15:16:29.540167 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9c426eb-2a62-47e9-aa4a-53a30fe02b82-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.068087 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee7e3263b2dee12dc6fe7a42457ffbf6509bd6d4cff970117aec89f30e09e69" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.068183 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-dzfgh" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.524528 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:30 crc kubenswrapper[4763]: E0131 15:16:30.525035 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerName="swift-ring-rebalance" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.525062 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerName="swift-ring-rebalance" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.525336 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" containerName="swift-ring-rebalance" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.526143 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.529862 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.529867 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.536019 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.656837 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657201 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657247 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657303 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657350 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.657506 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759612 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" 
Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759736 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759788 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759827 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.759882 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.760854 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.760976 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.761546 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.767155 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 
15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.767148 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.793844 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"swift-ring-rebalance-debug-bthkw\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:30 crc kubenswrapper[4763]: I0131 15:16:30.859448 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:31 crc kubenswrapper[4763]: I0131 15:16:31.052990 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c426eb-2a62-47e9-aa4a-53a30fe02b82" path="/var/lib/kubelet/pods/a9c426eb-2a62-47e9-aa4a-53a30fe02b82/volumes" Jan 31 15:16:31 crc kubenswrapper[4763]: I0131 15:16:31.303117 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:32 crc kubenswrapper[4763]: I0131 15:16:32.090650 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" event={"ID":"a43d57a1-ba43-46fb-a389-59805d5a576e","Type":"ContainerStarted","Data":"2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014"} Jan 31 15:16:32 crc kubenswrapper[4763]: I0131 15:16:32.090784 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" event={"ID":"a43d57a1-ba43-46fb-a389-59805d5a576e","Type":"ContainerStarted","Data":"13d9bc5b8a0579a4aacab59b3629dad98778c85c9556b82cf66907dfc9566d29"} Jan 31 15:16:32 crc kubenswrapper[4763]: I0131 15:16:32.118883 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" podStartSLOduration=2.118859665 podStartE2EDuration="2.118859665s" podCreationTimestamp="2026-01-31 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:32.113138723 +0000 UTC m=+1311.867877036" watchObservedRunningTime="2026-01-31 15:16:32.118859665 +0000 UTC m=+1311.873597998" Jan 31 15:16:33 crc kubenswrapper[4763]: I0131 15:16:33.101707 4763 generic.go:334] "Generic (PLEG): container finished" podID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerID="2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014" exitCode=0 Jan 31 15:16:33 crc kubenswrapper[4763]: I0131 15:16:33.101764 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" event={"ID":"a43d57a1-ba43-46fb-a389-59805d5a576e","Type":"ContainerDied","Data":"2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014"} Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.459788 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.506021 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.514478 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-bthkw"] Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.619849 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.619915 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.619985 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620035 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620116 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620834 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.620942 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") pod \"a43d57a1-ba43-46fb-a389-59805d5a576e\" (UID: \"a43d57a1-ba43-46fb-a389-59805d5a576e\") " Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.621239 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.621626 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a43d57a1-ba43-46fb-a389-59805d5a576e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.621645 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.626340 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx" (OuterVolumeSpecName: "kube-api-access-4qcjx") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "kube-api-access-4qcjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.643960 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts" (OuterVolumeSpecName: "scripts") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.652903 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.653769 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a43d57a1-ba43-46fb-a389-59805d5a576e" (UID: "a43d57a1-ba43-46fb-a389-59805d5a576e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722877 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722921 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a43d57a1-ba43-46fb-a389-59805d5a576e-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722930 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qcjx\" (UniqueName: \"kubernetes.io/projected/a43d57a1-ba43-46fb-a389-59805d5a576e-kube-api-access-4qcjx\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:34 crc kubenswrapper[4763]: I0131 15:16:34.722941 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a43d57a1-ba43-46fb-a389-59805d5a576e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.050480 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" path="/var/lib/kubelet/pods/a43d57a1-ba43-46fb-a389-59805d5a576e/volumes" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.117170 4763 scope.go:117] "RemoveContainer" containerID="2c8676d586d95c99312bd225b1346101b96d75d6e5d4002c693abb76994fc014" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.117192 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-bthkw" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.689186 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:35 crc kubenswrapper[4763]: E0131 15:16:35.689969 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerName="swift-ring-rebalance" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.689993 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerName="swift-ring-rebalance" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.690258 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43d57a1-ba43-46fb-a389-59805d5a576e" containerName="swift-ring-rebalance" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.690924 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.694890 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.697275 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.706462 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839075 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839192 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839239 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839270 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839413 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.839486 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941481 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc 
kubenswrapper[4763]: I0131 15:16:35.941592 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941662 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941767 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941812 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.941845 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.942087 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.943168 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.944183 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.946404 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.946623 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:35 crc kubenswrapper[4763]: I0131 15:16:35.966609 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"swift-ring-rebalance-debug-8w4v4\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:36 crc kubenswrapper[4763]: I0131 15:16:36.025678 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:36 crc kubenswrapper[4763]: I0131 15:16:36.493112 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:37 crc kubenswrapper[4763]: I0131 15:16:37.163020 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" event={"ID":"7dec991f-9426-402a-8f83-8547257d2b30","Type":"ContainerStarted","Data":"7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55"} Jan 31 15:16:37 crc kubenswrapper[4763]: I0131 15:16:37.163334 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" event={"ID":"7dec991f-9426-402a-8f83-8547257d2b30","Type":"ContainerStarted","Data":"099d7ea2b86fce3db22caaf0c1563ff510b9caeb509987782a85e05d35b47aab"} Jan 31 15:16:37 crc kubenswrapper[4763]: I0131 15:16:37.185511 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" podStartSLOduration=2.185494953 podStartE2EDuration="2.185494953s" podCreationTimestamp="2026-01-31 15:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:16:37.179593116 +0000 UTC m=+1316.934331409" watchObservedRunningTime="2026-01-31 15:16:37.185494953 +0000 UTC m=+1316.940233256" Jan 31 15:16:38 crc kubenswrapper[4763]: I0131 15:16:38.176722 4763 generic.go:334] "Generic (PLEG): container finished" podID="7dec991f-9426-402a-8f83-8547257d2b30" containerID="7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55" exitCode=0 Jan 31 15:16:38 crc kubenswrapper[4763]: I0131 15:16:38.176769 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" event={"ID":"7dec991f-9426-402a-8f83-8547257d2b30","Type":"ContainerDied","Data":"7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55"} Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.472362 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.522664 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.532188 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606485 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606549 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606573 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606763 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606793 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.606818 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") pod \"7dec991f-9426-402a-8f83-8547257d2b30\" (UID: \"7dec991f-9426-402a-8f83-8547257d2b30\") " Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.611894 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.611968 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612513 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server" containerID="cri-o://9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612631 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater" containerID="cri-o://afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612675 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor" containerID="cri-o://f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612729 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator" containerID="cri-o://483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612664 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server" containerID="cri-o://1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612915 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" containerID="cri-o://1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612775 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server" containerID="cri-o://400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613107 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater" containerID="cri-o://3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612813 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator" containerID="cri-o://619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613211 4763 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor" containerID="cri-o://73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613227 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron" containerID="cri-o://c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.613266 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator" containerID="cri-o://4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612799 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" containerID="cri-o://3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.612785 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper" containerID="cri-o://cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.615022 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv" (OuterVolumeSpecName: "kube-api-access-496xv") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "kube-api-access-496xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.618782 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync" containerID="cri-o://a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.619446 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.655387 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts" (OuterVolumeSpecName: "scripts") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.681908 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682421 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server" containerID="cri-o://2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682885 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater" containerID="cri-o://18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682951 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron" containerID="cri-o://36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.682995 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync" containerID="cri-o://a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683024 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer" containerID="cri-o://aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683052 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater" containerID="cri-o://9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683081 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor" containerID="cri-o://4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683106 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator" containerID="cri-o://811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683137 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server" containerID="cri-o://433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683216 4763 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper" containerID="cri-o://539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683251 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor" containerID="cri-o://5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683283 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator" containerID="cri-o://46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683311 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server" containerID="cri-o://d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683351 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" containerID="cri-o://6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.683387 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor" containerID="cri-o://8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.691490 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692402 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server" containerID="cri-o://0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692395 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer" containerID="cri-o://2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692483 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater" containerID="cri-o://4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692538 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor" 
containerID="cri-o://98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692639 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator" containerID="cri-o://be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692678 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron" containerID="cri-o://bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692721 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server" containerID="cri-o://92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692746 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync" containerID="cri-o://ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692766 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater" containerID="cri-o://fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692833 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor" containerID="cri-o://9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692866 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper" containerID="cri-o://6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692903 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator" containerID="cri-o://86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692907 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor" containerID="cri-o://710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692935 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" 
containerName="container-server" containerID="cri-o://1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.692964 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator" containerID="cri-o://99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.700657 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.708787 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-cw985"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709921 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709951 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7dec991f-9426-402a-8f83-8547257d2b30-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709968 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-496xv\" (UniqueName: \"kubernetes.io/projected/7dec991f-9426-402a-8f83-8547257d2b30-kube-api-access-496xv\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.709978 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7dec991f-9426-402a-8f83-8547257d2b30-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.720172 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.720461 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" containerID="cri-o://2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.720899 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server" containerID="cri-o://feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8" gracePeriod=30 Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.734974 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.752243 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7dec991f-9426-402a-8f83-8547257d2b30" (UID: "7dec991f-9426-402a-8f83-8547257d2b30"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.814894 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:39 crc kubenswrapper[4763]: I0131 15:16:39.814941 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7dec991f-9426-402a-8f83-8547257d2b30-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: E0131 15:16:40.149798 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-conmon-0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1baecf_2b7e_418b_8c64_95b6551f365e.slice/crio-conmon-a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda76b9c98_a93e_4935_947c_9ecf237b7a97.slice/crio-conmon-feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1baecf_2b7e_418b_8c64_95b6551f365e.slice/crio-a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5bfb32_7eae_4b04_9aee_d0873f0c93b9.slice/crio-a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-conmon-92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209233 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209265 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209275 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209283 4763 generic.go:334] "Generic (PLEG): container finished" 
podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209290 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209298 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209304 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209313 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209319 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209325 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209332 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209269 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209369 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209386 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209396 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209404 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209412 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209420 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209428 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209439 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209447 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.209454 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.216980 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217004 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217010 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217017 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217023 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217028 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217035 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" exitCode=0 Jan 
31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217041 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217047 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217053 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217058 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217064 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217070 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217113 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217137 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217147 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217155 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217163 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217171 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217179 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217188 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217196 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217205 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217213 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217222 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.217229 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239457 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239677 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239711 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239719 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239726 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239732 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93" exitCode=0 Jan 31 15:16:40 crc 
kubenswrapper[4763]: I0131 15:16:40.239793 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239799 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239805 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239811 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239817 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239822 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239754 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239885 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239896 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239904 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239913 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239920 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239928 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239936 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239944 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239952 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.239961 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.244959 4763 generic.go:334] "Generic (PLEG): container finished" podID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerID="feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.244996 4763 generic.go:334] "Generic (PLEG): container finished" podID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerID="2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c" exitCode=0 Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.245088 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerDied","Data":"feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.245126 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerDied","Data":"2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c"} Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.247133 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="099d7ea2b86fce3db22caaf0c1563ff510b9caeb509987782a85e05d35b47aab" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.247204 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-8w4v4" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.399561 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529241 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529344 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529435 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529482 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529530 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") pod \"a76b9c98-a93e-4935-947c-9ecf237b7a97\" (UID: \"a76b9c98-a93e-4935-947c-9ecf237b7a97\") " Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529865 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.529875 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.530399 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.530422 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a76b9c98-a93e-4935-947c-9ecf237b7a97-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.533438 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c" (OuterVolumeSpecName: "kube-api-access-f758c") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). 
InnerVolumeSpecName "kube-api-access-f758c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.534203 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.577261 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data" (OuterVolumeSpecName: "config-data") pod "a76b9c98-a93e-4935-947c-9ecf237b7a97" (UID: "a76b9c98-a93e-4935-947c-9ecf237b7a97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.632097 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f758c\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-kube-api-access-f758c\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.632136 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76b9c98-a93e-4935-947c-9ecf237b7a97-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:40 crc kubenswrapper[4763]: I0131 15:16:40.632146 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a76b9c98-a93e-4935-947c-9ecf237b7a97-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.059809 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a85fc7e-226b-498d-9156-c4a5ecf075b9" path="/var/lib/kubelet/pods/5a85fc7e-226b-498d-9156-c4a5ecf075b9/volumes" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.063624 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dec991f-9426-402a-8f83-8547257d2b30" path="/var/lib/kubelet/pods/7dec991f-9426-402a-8f83-8547257d2b30/volumes" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.259919 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.260450 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj" event={"ID":"a76b9c98-a93e-4935-947c-9ecf237b7a97","Type":"ContainerDied","Data":"d3926134dfc43380c3c616215cd0f1de55c1eb370aa433868d96185fd6990644"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.260560 4763 scope.go:117] "RemoveContainer" containerID="feb9eca2f3460795772517c1281dac16f555d0d80bd9ba799ff73f601b5b71c8" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.274906 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.274955 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.274974 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.275009 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.275081 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.275112 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.291928 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.292029 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.298688 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.301758 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.301783 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a" exitCode=0 Jan 31 15:16:41 crc kubenswrapper[4763]: 
I0131 15:16:41.301803 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.301831 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a"} Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.302720 4763 scope.go:117] "RemoveContainer" containerID="2bbe980cdd23a298b995253469039eca5a71af206ca6fa11e2f7a2a5c08d968c" Jan 31 15:16:41 crc kubenswrapper[4763]: I0131 15:16:41.309656 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-brqpj"] Jan 31 15:16:43 crc kubenswrapper[4763]: I0131 15:16:43.067055 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" path="/var/lib/kubelet/pods/a76b9c98-a93e-4935-947c-9ecf237b7a97/volumes" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.177795 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.177878 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.177936 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.179316 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.179823 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554" gracePeriod=600 Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.346118 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554" exitCode=0 Jan 31 15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.346192 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554"} Jan 31 
15:16:44 crc kubenswrapper[4763]: I0131 15:16:44.346267 4763 scope.go:117] "RemoveContainer" containerID="f20a9db2f936557dee18355e3894646df84495361a6df22f393e9e76f8aebb8e" Jan 31 15:16:45 crc kubenswrapper[4763]: I0131 15:16:45.364477 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"} Jan 31 15:17:03 crc kubenswrapper[4763]: E0131 15:17:03.906463 4763 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/77ce0e0be23474712532b9234efc385243afef81f67985d754eda69b40d16bc7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/77ce0e0be23474712532b9234efc385243afef81f67985d754eda69b40d16bc7/diff: no such file or directory, extraDiskErr: Jan 31 15:17:09 crc kubenswrapper[4763]: E0131 15:17:09.945544 4763 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6ace0ac_a7c8_4413_90ee_53d6bf699eef.slice/crio-conmon-bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5bfb32_7eae_4b04_9aee_d0873f0c93b9.slice/crio-36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c.scope\": RecentStats: unable to find data in memory cache]" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.172985 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.184626 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.188707 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244794 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244876 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244930 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.244985 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245028 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245097 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245148 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245218 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245320 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245349 
4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245386 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245421 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") pod \"8d1baecf-2b7e-418b-8c64-95b6551f365e\" (UID: \"8d1baecf-2b7e-418b-8c64-95b6551f365e\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245471 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") pod \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\" (UID: \"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.245498 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\" (UID: \"e6ace0ac-a7c8-4413-90ee-53d6bf699eef\") " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.248414 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock" (OuterVolumeSpecName: "lock") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.249213 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock" (OuterVolumeSpecName: "lock") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.249295 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache" (OuterVolumeSpecName: "cache") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.250124 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock" (OuterVolumeSpecName: "lock") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.252084 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache" (OuterVolumeSpecName: "cache") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253432 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253582 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253798 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.253863 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b" (OuterVolumeSpecName: "kube-api-access-8r45b") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "kube-api-access-8r45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254087 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8" (OuterVolumeSpecName: "kube-api-access-n9hz8") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "kube-api-access-n9hz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254273 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e6ace0ac-a7c8-4413-90ee-53d6bf699eef" (UID: "e6ace0ac-a7c8-4413-90ee-53d6bf699eef"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254407 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l" (OuterVolumeSpecName: "kube-api-access-sp55l") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "kube-api-access-sp55l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254512 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" (UID: "0a5bfb32-7eae-4b04-9aee-d0873f0c93b9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.254558 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache" (OuterVolumeSpecName: "cache") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.256314 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "swift") pod "8d1baecf-2b7e-418b-8c64-95b6551f365e" (UID: "8d1baecf-2b7e-418b-8c64-95b6551f365e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347127 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347205 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347222 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347233 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347247 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp55l\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-kube-api-access-sp55l\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347268 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347280 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347292 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347302 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/8d1baecf-2b7e-418b-8c64-95b6551f365e-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347313 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r45b\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-kube-api-access-8r45b\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347324 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347333 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347343 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9hz8\" (UniqueName: \"kubernetes.io/projected/e6ace0ac-a7c8-4413-90ee-53d6bf699eef-kube-api-access-n9hz8\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347360 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.347371 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8d1baecf-2b7e-418b-8c64-95b6551f365e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.360623 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.366560 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.370314 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.448361 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.448394 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.448406 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640110 4763 generic.go:334] "Generic (PLEG): container finished" podID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" exitCode=137 Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640209 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640349 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640479 4763 scope.go:117] "RemoveContainer" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.640464 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"8d1baecf-2b7e-418b-8c64-95b6551f365e","Type":"ContainerDied","Data":"1d4cf1913d51894066c90a63ebfe91dd9186021fe6b288d04eb4138560d222cd"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653434 4763 generic.go:334] "Generic (PLEG): container finished" podID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" exitCode=137 Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653546 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653607 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e6ace0ac-a7c8-4413-90ee-53d6bf699eef","Type":"ContainerDied","Data":"54e9716be29a1a3e98b1c62af30cb48d8d18ed8bc33c1e831e29d90d1bbee6be"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653626 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653643 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653651 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653659 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653667 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653677 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653686 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653711 4763 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.653719 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.654566 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669685 4763 generic.go:334] "Generic (PLEG): container finished" podID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerID="36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c" exitCode=137 Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669814 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669843 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669858 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669865 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669870 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669877 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669884 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669890 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669896 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669902 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669908 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669916 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669922 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669928 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669933 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669939 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669955 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"0a5bfb32-7eae-4b04-9aee-d0873f0c93b9","Type":"ContainerDied","Data":"17d6e2ea7e01a1661ce12761f2f04f8c25d3d7cfb02f9b9ef0ef2888b0bc4824"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669968 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669975 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669981 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669988 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.669995 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670001 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670007 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670014 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670020 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670027 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670033 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670040 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670046 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670053 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670061 4763 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad"} Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.670185 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.690173 4763 scope.go:117] "RemoveContainer" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.696924 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.705858 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.717962 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.724089 4763 scope.go:117] "RemoveContainer" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.725538 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.740665 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.745900 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.749738 4763 scope.go:117] "RemoveContainer" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.781517 4763 scope.go:117] "RemoveContainer" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.809045 4763 scope.go:117] "RemoveContainer" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.830539 4763 scope.go:117] "RemoveContainer" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.850196 4763 scope.go:117] "RemoveContainer" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.874060 4763 scope.go:117] "RemoveContainer" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.895654 4763 scope.go:117] "RemoveContainer" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.921677 4763 scope.go:117] "RemoveContainer" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.938569 4763 scope.go:117] "RemoveContainer" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.958082 4763 scope.go:117] "RemoveContainer" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.972110 4763 scope.go:117] "RemoveContainer" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" Jan 31 15:17:10 crc kubenswrapper[4763]: I0131 15:17:10.999267 4763 scope.go:117] "RemoveContainer" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" Jan 31 15:17:11 crc 
kubenswrapper[4763]: I0131 15:17:11.018875 4763 scope.go:117] "RemoveContainer" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.019312 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a\": container with ID starting with c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a not found: ID does not exist" containerID="c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019354 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a"} err="failed to get container status \"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a\": rpc error: code = NotFound desc = could not find container \"c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a\": container with ID starting with c8c0e78ca1151a94155419bd0418b77c66fc386384f22eb1e0e7dc241590a78a not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019380 4763 scope.go:117] "RemoveContainer" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.019773 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24\": container with ID starting with a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24 not found: ID does not exist" containerID="a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019803 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24"} err="failed to get container status \"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24\": rpc error: code = NotFound desc = could not find container \"a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24\": container with ID starting with a2e002a6a1e3984c24d1cacf4ede42929c80f438b7e4fd3d25cc84706f472d24 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.019821 4763 scope.go:117] "RemoveContainer" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.020087 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594\": container with ID starting with 1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594 not found: ID does not exist" containerID="1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020128 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594"} err="failed to get container status \"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594\": rpc error: code = NotFound desc = could not find container 
\"1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594\": container with ID starting with 1200ff3e3aca2458c95ff5ac8deaa7609c141f64452aa6fc569865a2a07ba594 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020157 4763 scope.go:117] "RemoveContainer" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.020441 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef\": container with ID starting with 3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef not found: ID does not exist" containerID="3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020466 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef"} err="failed to get container status \"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef\": rpc error: code = NotFound desc = could not find container \"3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef\": container with ID starting with 3fc180b723e4c8f13af50f16bf6dd031dc4b30e6837b659d53f2126db991cdef not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020486 4763 scope.go:117] "RemoveContainer" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.020855 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4\": container with ID starting with 73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4 not found: ID does not exist" containerID="73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020928 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4"} err="failed to get container status \"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4\": rpc error: code = NotFound desc = could not find container \"73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4\": container with ID starting with 73ce9e2e7676ac3fba63ec19a18765b3c9e2f7127b13b82b0a8da016af057bd4 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.020970 4763 scope.go:117] "RemoveContainer" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.021307 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a\": container with ID starting with 4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a not found: ID does not exist" containerID="4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021349 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a"} 
err="failed to get container status \"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a\": rpc error: code = NotFound desc = could not find container \"4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a\": container with ID starting with 4125ef4f4111a2faea3497473d4e28a723f901956d45be9b6df5be593bcb094a not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021375 4763 scope.go:117] "RemoveContainer" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.021627 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406\": container with ID starting with 1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406 not found: ID does not exist" containerID="1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021660 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406"} err="failed to get container status \"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406\": rpc error: code = NotFound desc = could not find container \"1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406\": container with ID starting with 1edb0b7f2c7c7855ec59a3e46c12b4d9f8f2128670389c78068047481b466406 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021686 4763 scope.go:117] "RemoveContainer" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.021973 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a\": container with ID starting with afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a not found: ID does not exist" containerID="afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.021998 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a"} err="failed to get container status \"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a\": rpc error: code = NotFound desc = could not find container \"afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a\": container with ID starting with afce4d6edbdd41898c2d05bcfd465a50bf7fbe4f924d1536d022f9b7bde6cc4a not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022018 4763 scope.go:117] "RemoveContainer" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.022287 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe\": container with ID starting with f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe not found: ID does not exist" containerID="f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022320 4763 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe"} err="failed to get container status \"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe\": rpc error: code = NotFound desc = could not find container \"f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe\": container with ID starting with f10d79c1c2a03423d0f8e9bea742ee0f71b0de111c6c2b0530f79b1bb13590fe not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022343 4763 scope.go:117] "RemoveContainer" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.022605 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2\": container with ID starting with 483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2 not found: ID does not exist" containerID="483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022629 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2"} err="failed to get container status \"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2\": rpc error: code = NotFound desc = could not find container \"483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2\": container with ID starting with 483ce81d4f59c1977d7f239e53747fee35232b1656fc7602c1a77108638d9db2 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022650 4763 scope.go:117] "RemoveContainer" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.022939 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971\": container with ID starting with 400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971 not found: ID does not exist" containerID="400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022969 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971"} err="failed to get container status \"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971\": rpc error: code = NotFound desc = could not find container \"400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971\": container with ID starting with 400f9226fbc81108ff15e964994bc02c7044985a7b88bdcfc5b9ec38e0ec0971 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.022988 4763 scope.go:117] "RemoveContainer" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.023250 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634\": container with ID starting with cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634 not found: ID does 
not exist" containerID="cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023272 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634"} err="failed to get container status \"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634\": rpc error: code = NotFound desc = could not find container \"cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634\": container with ID starting with cde84086c356956bf7c7e9e311a2b5c0dfd023d642e5c211c956f0606f3b6634 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023288 4763 scope.go:117] "RemoveContainer" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.023476 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d\": container with ID starting with 3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d not found: ID does not exist" containerID="3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023504 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d"} err="failed to get container status \"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d\": rpc error: code = NotFound desc = could not find container \"3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d\": container with ID starting with 3343428eafc3c7fe632334cb15bc487e6a2cf486419894cbf84ef0039e8d4a0d not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023523 4763 scope.go:117] "RemoveContainer" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.023911 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d\": container with ID starting with 619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d not found: ID does not exist" containerID="619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023930 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d"} err="failed to get container status \"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d\": rpc error: code = NotFound desc = could not find container \"619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d\": container with ID starting with 619d6d7dc92d20efbdc5a9e71ba3bb545609a50170ca45c2f90260cb8b92859d not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.023943 4763 scope.go:117] "RemoveContainer" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.024247 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb\": container with ID starting with 9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb not found: ID does not exist" containerID="9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.024274 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb"} err="failed to get container status \"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb\": rpc error: code = NotFound desc = could not find container \"9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb\": container with ID starting with 9ce27e519b496e795e29d01f6e35ba0dbe740066395319ccd8400796bf4190cb not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.024291 4763 scope.go:117] "RemoveContainer" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.044772 4763 scope.go:117] "RemoveContainer" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.064387 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" path="/var/lib/kubelet/pods/0a5bfb32-7eae-4b04-9aee-d0873f0c93b9/volumes" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.067137 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" path="/var/lib/kubelet/pods/8d1baecf-2b7e-418b-8c64-95b6551f365e/volumes" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.068839 4763 scope.go:117] "RemoveContainer" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.069118 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" path="/var/lib/kubelet/pods/e6ace0ac-a7c8-4413-90ee-53d6bf699eef/volumes" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.085957 4763 scope.go:117] "RemoveContainer" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.102296 4763 scope.go:117] "RemoveContainer" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.120728 4763 scope.go:117] "RemoveContainer" containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.137955 4763 scope.go:117] "RemoveContainer" containerID="92ad90a8f7a31ed2cf85f0757c71ef34da8f0c359347e86a4bcef54ce85516a6" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.156878 4763 scope.go:117] "RemoveContainer" containerID="fbf72198234fda54505fa079d75aa79a7acaa34fd24b31ddd293f0aae93e0c93" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.175787 4763 scope.go:117] "RemoveContainer" containerID="9a8de8d27e0778796c6a4baff03a1d1d922e9ae78f97728f4ae4aaf7fa341563" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.193305 4763 scope.go:117] "RemoveContainer" containerID="86c85c15b961f6795b6b5dc674ffc310871f76d3dbb1eb7225326dc937e30e64" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.218645 4763 scope.go:117] "RemoveContainer" 
containerID="1351528e398244f79dc02492c56eb7c0b5cf97f5af4ab495fdf34673f2be4e3a" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.236912 4763 scope.go:117] "RemoveContainer" containerID="6bbf9c9a63d33d2c6b6e0178b1ade00637993606e119582492e66b6a24013ab8" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.255424 4763 scope.go:117] "RemoveContainer" containerID="710120310f88e5c722eacaf49bc091b604c9117b11eb65ce63fd698be02b6699" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.278299 4763 scope.go:117] "RemoveContainer" containerID="99ceb2ba2010e4648343e1238ccdc00c944820acf210df397fc1aa1ebd073a62" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.297568 4763 scope.go:117] "RemoveContainer" containerID="0c8b0727815b22780ef52e9340937734001c60d592a0efa293171a6a99f631b8" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.327949 4763 scope.go:117] "RemoveContainer" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.328673 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e\": container with ID starting with bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e not found: ID does not exist" containerID="bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.328728 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e"} err="failed to get container status \"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e\": rpc error: code = NotFound desc = could not find container \"bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e\": container with ID starting with bce9cd376705065c5f2f54f669ef4c9cccbd07ea487a0242cf40f637878b3c9e not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.328748 4763 scope.go:117] "RemoveContainer" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.329092 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6\": container with ID starting with ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6 not found: ID does not exist" containerID="ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329117 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6"} err="failed to get container status \"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6\": rpc error: code = NotFound desc = could not find container \"ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6\": container with ID starting with ec009865c52a8d1e41b5965f9b645455af51d67839966fb510adf5cdeeb90ab6 not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329171 4763 scope.go:117] "RemoveContainer" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.329468 4763 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b\": container with ID starting with 2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b not found: ID does not exist" containerID="2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329491 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b"} err="failed to get container status \"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b\": rpc error: code = NotFound desc = could not find container \"2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b\": container with ID starting with 2939485c2d7229a2f4e693e47c132dadba16e2fdb364592f267d9a10abee7f3b not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329505 4763 scope.go:117] "RemoveContainer" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.329811 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b\": container with ID starting with 4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b not found: ID does not exist" containerID="4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329838 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b"} err="failed to get container status \"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b\": rpc error: code = NotFound desc = could not find container \"4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b\": container with ID starting with 4992aa25b0e802e8ef4afebdda60e6de87c86a4603281c770436bd0619a4396b not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.329873 4763 scope.go:117] "RemoveContainer" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.330109 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d\": container with ID starting with 98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d not found: ID does not exist" containerID="98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.330174 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d"} err="failed to get container status \"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d\": rpc error: code = NotFound desc = could not find container \"98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d\": container with ID starting with 98598fa6e6e265c08f4b896f2e536f3ceb4a0027cac1512dcf4f464c4d6a0e2d not found: ID does not exist" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.330219 4763 scope.go:117] "RemoveContainer" 
containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" Jan 31 15:17:11 crc kubenswrapper[4763]: E0131 15:17:11.330491 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832\": container with ID starting with be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832 not found: ID does not exist" containerID="be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832" Jan 31 15:17:11 crc kubenswrapper[4763]: I0131 15:17:11.330516 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832"} err="failed to get container status \"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832\": rpc error: code = NotFound desc = could not find container \"be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832\": container with ID starting with be15a349543095a208d3733f5743e24532655dd5a6a6458457ba577f54adc832 not found: ID does not exist" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.590444 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591163 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591174 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591186 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591192 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591198 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591204 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591214 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591221 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591229 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591235 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance" Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591245 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server" Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591251 
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.590444 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591163 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591174 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591186 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591192 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591198 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591204 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591214 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591221 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591229 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591235 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591245 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591251 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591258 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591263 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591274 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591279 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591289 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591295 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591302 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591307 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591316 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591321 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591330 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591337 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591343 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591348 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591357 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591363 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591370 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591375 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591385 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591390 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591400 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591406 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591415 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591422 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591431 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591436 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591444 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591450 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591458 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591463 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591469 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591475 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591484 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591489 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591495 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591500 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591507 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591512 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591518 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591523 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591531 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591537 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591545 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591550 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591559 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591564 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591572 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591580 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591589 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591595 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591620 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591626 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591635 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591640 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591646 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591651 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591660 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591665 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591672 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591677 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591685 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591703 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591714 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591720 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591727 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591733 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591741 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591746 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591755 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591760 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591767 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591772 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591781 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591786 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591798 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591811 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591820 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591825 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591834 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591840 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591848 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591854 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.591861 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591867 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591977 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591986 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-httpd"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.591993 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592000 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dec991f-9426-402a-8f83-8547257d2b30" containerName="swift-ring-rebalance"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592007 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592015 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592022 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592032 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592039 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592046 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592055 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592062 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76b9c98-a93e-4935-947c-9ecf237b7a97" containerName="proxy-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592068 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592077 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592084 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592091 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592096 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592104 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592111 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592118 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592126 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592133 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592141 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592148 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592155 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592160 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592167 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592176 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592185 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592193 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592200 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592207 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592215 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592223 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592232 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592239 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592245 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592252 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-updater"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592259 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592266 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592272 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-server"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592279 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="rsync"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592287 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="account-reaper"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592294 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="container-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592300 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1baecf-2b7e-418b-8c64-95b6551f365e" containerName="object-expirer"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592308 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ace0ac-a7c8-4413-90ee-53d6bf699eef" containerName="swift-recon-cron"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592316 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="account-replicator"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.592324 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5bfb32-7eae-4b04-9aee-d0873f0c93b9" containerName="object-auditor"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.595652 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.598874 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.598921 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.598951 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-ngwjv"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.599243 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.630014 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722649 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722710 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722736 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722765 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.722789 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824370 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824428 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824451 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824491 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824516 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.824732 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.824788 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824742 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") device mount path \"/mnt/openstack/pv05\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.824857 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:14.324836164 +0000 UTC m=+1354.079574517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.824999 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.825261 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.846541 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.862211 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.876922 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.877820 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.880800 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.881079 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.881309 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.889890 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:13 crc kubenswrapper[4763]: I0131 15:17:13.913183 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:13 crc kubenswrapper[4763]: E0131 15:17:13.930546 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-rbs9q ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-ring-rebalance-9d572" podUID="0e216b50-34a4-4079-a8eb-2bd926eda934"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026331 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026379 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026434 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026457 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026476 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.026748 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128364 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128417 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128439 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128516 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128550 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.128569 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.129159 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.129533 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.129975 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.133212 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.139209 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.151913 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"swift-ring-rebalance-9d572\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") " pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.332347 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.332992 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.333034 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.333200 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:15.33313546 +0000 UTC m=+1355.087873793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.487226 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"]
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.489142 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.500192 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"]
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636565 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636709 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636785 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636834 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.636898 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.726979 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.734134 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.737842 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.737940 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.737995 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738032 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738069 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.738186 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.738217 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: E0131 15:17:14.738284 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:15.238261303 +0000 UTC m=+1354.992999616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738421 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.738583 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.744889 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.754121 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.838890 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839005 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839036 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839106 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839149 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839193 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") pod \"0e216b50-34a4-4079-a8eb-2bd926eda934\" (UID: \"0e216b50-34a4-4079-a8eb-2bd926eda934\") "
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839296 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.839600 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840079 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts" (OuterVolumeSpecName: "scripts") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840570 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/0e216b50-34a4-4079-a8eb-2bd926eda934-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840609 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.840631 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/0e216b50-34a4-4079-a8eb-2bd926eda934-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.842425 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.843219 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.846866 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q" (OuterVolumeSpecName: "kube-api-access-rbs9q") pod "0e216b50-34a4-4079-a8eb-2bd926eda934" (UID: "0e216b50-34a4-4079-a8eb-2bd926eda934"). InnerVolumeSpecName "kube-api-access-rbs9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.942237 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.942284 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/0e216b50-34a4-4079-a8eb-2bd926eda934-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:14 crc kubenswrapper[4763]: I0131 15:17:14.942303 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbs9q\" (UniqueName: \"kubernetes.io/projected/0e216b50-34a4-4079-a8eb-2bd926eda934-kube-api-access-rbs9q\") on node \"crc\" DevicePath \"\""
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.248609 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.248832 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.248846 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.248914 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:16.24887358 +0000 UTC m=+1356.003611873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.349772 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.349992 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.350020 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: E0131 15:17:15.350089 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:17.350068232 +0000 UTC m=+1357.104806525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.734726 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9d572"
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.770091 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:15 crc kubenswrapper[4763]: I0131 15:17:15.777231 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9d572"]
Jan 31 15:17:16 crc kubenswrapper[4763]: I0131 15:17:16.263499 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65"
Jan 31 15:17:16 crc kubenswrapper[4763]: E0131 15:17:16.263669 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 31 15:17:16 crc kubenswrapper[4763]: E0131 15:17:16.263685 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found
Jan 31 15:17:16 crc kubenswrapper[4763]: E0131 15:17:16.263761 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:18.263739297 +0000 UTC m=+1358.018477600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found
Jan 31 15:17:17 crc kubenswrapper[4763]: I0131 15:17:17.050856 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e216b50-34a4-4079-a8eb-2bd926eda934" path="/var/lib/kubelet/pods/0e216b50-34a4-4079-a8eb-2bd926eda934/volumes"
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:17:21 crc kubenswrapper[4763]: I0131 15:17:21.454617 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:21 crc kubenswrapper[4763]: E0131 15:17:21.454877 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:21 crc kubenswrapper[4763]: E0131 15:17:21.455151 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:17:21 crc kubenswrapper[4763]: E0131 15:17:21.455210 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:29.455191934 +0000 UTC m=+1369.209930227 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:17:22 crc kubenswrapper[4763]: I0131 15:17:22.368445 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:17:22 crc kubenswrapper[4763]: E0131 15:17:22.368883 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:22 crc kubenswrapper[4763]: E0131 15:17:22.368920 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:17:22 crc kubenswrapper[4763]: E0131 15:17:22.368997 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:30.368974523 +0000 UTC m=+1370.123712856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:17:29 crc kubenswrapper[4763]: I0131 15:17:29.481109 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:29 crc kubenswrapper[4763]: E0131 15:17:29.481419 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:29 crc kubenswrapper[4763]: E0131 15:17:29.481754 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:17:29 crc kubenswrapper[4763]: E0131 15:17:29.481823 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:17:45.48180527 +0000 UTC m=+1385.236543563 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:17:30 crc kubenswrapper[4763]: I0131 15:17:30.392261 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:17:30 crc kubenswrapper[4763]: E0131 15:17:30.392591 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:30 crc kubenswrapper[4763]: E0131 15:17:30.392642 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:17:30 crc kubenswrapper[4763]: E0131 15:17:30.392790 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:17:46.392749963 +0000 UTC m=+1386.147488316 (durationBeforeRetry 16s). 
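Note the durationBeforeRetry values marching through 1s, 2s, 4s, 8s and now 16s: nestedpendingoperations applies per-volume exponential backoff, doubling the wait after each failed MountVolume attempt. The 15:19:21 entries further down show where it levels off, at 2m2s. A sketch of that progression — the doubling is visible in the log itself; the exact cap is read off the later entries, not taken from source:

    // Reproduce the durationBeforeRetry progression seen above: double on every
    // failure, capped at 2m2s (the largest value this log ever shows).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxDelay = 2*time.Minute + 2*time.Second // inferred from the 15:19:21 entries
        d := time.Second                               // the first retry above waits 1s
        for i := 0; i < 10; i++ {
            fmt.Println(d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }
    // prints: 1s 2s 4s 8s 16s 32s 1m4s 2m2s 2m2s 2m2s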
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:17:45 crc kubenswrapper[4763]: I0131 15:17:45.486912 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:17:45 crc kubenswrapper[4763]: E0131 15:17:45.487187 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:45 crc kubenswrapper[4763]: E0131 15:17:45.487604 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:17:45 crc kubenswrapper[4763]: E0131 15:17:45.487673 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:18:17.487649084 +0000 UTC m=+1417.242387387 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:17:46 crc kubenswrapper[4763]: I0131 15:17:46.399307 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:17:46 crc kubenswrapper[4763]: E0131 15:17:46.399589 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:17:46 crc kubenswrapper[4763]: E0131 15:17:46.399902 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:17:46 crc kubenswrapper[4763]: E0131 15:17:46.400007 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:18:18.399978105 +0000 UTC m=+1418.154716438 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.069253 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.071421 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.082415 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.138275 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.138461 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.138483 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.239552 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.239607 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.239672 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.240165 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.240288 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.261346 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"redhat-marketplace-fdvt6\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.393213 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:08 crc kubenswrapper[4763]: I0131 15:18:08.679797 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.219667 4763 generic.go:334] "Generic (PLEG): container finished" podID="5bf14829-3173-4bb3-9696-9f721465d757" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2" exitCode=0 Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.219732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"} Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.219761 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerStarted","Data":"5cc5ef5e63ad76ca638a72e06d1076b5d187e70eab209b3913d347757ee3caf2"} Jan 31 15:18:09 crc kubenswrapper[4763]: I0131 15:18:09.221795 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:18:10 crc kubenswrapper[4763]: I0131 15:18:10.227461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerStarted","Data":"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"} Jan 31 15:18:11 crc kubenswrapper[4763]: I0131 15:18:11.236210 4763 generic.go:334] "Generic (PLEG): container finished" podID="5bf14829-3173-4bb3-9696-9f721465d757" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2" exitCode=0 Jan 31 15:18:11 crc kubenswrapper[4763]: I0131 15:18:11.236285 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"} Jan 31 15:18:12 crc kubenswrapper[4763]: I0131 15:18:12.246235 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerStarted","Data":"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"} Jan 31 15:18:17 crc kubenswrapper[4763]: I0131 15:18:17.575437 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:18:17 crc kubenswrapper[4763]: E0131 15:18:17.575811 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:18:17 crc kubenswrapper[4763]: E0131 15:18:17.576183 4763 
projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:18:17 crc kubenswrapper[4763]: E0131 15:18:17.576247 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:19:21.576227515 +0000 UTC m=+1481.330965808 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.394264 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.394356 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.462881 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.489734 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:18:18 crc kubenswrapper[4763]: E0131 15:18:18.489835 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:18:18 crc kubenswrapper[4763]: E0131 15:18:18.489867 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:18:18 crc kubenswrapper[4763]: E0131 15:18:18.489927 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:19:22.489908281 +0000 UTC m=+1482.244646574 (durationBeforeRetry 1m4s). 
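The probe transitions just above (startup "unhealthy" at 15:18:18.394, then "started" at 15:18:18.462) are the registry-server's startup probe flipping once the catalog begins serving. The failure output captured later in this log for redhat-operators-l2zf2 ("timeout: failed to connect service \":50051\" within 1s") suggests the probe is a gRPC health check against :50051 with a one-second budget. A minimal equivalent, with the address and timeout taken from that output — whether the real probe uses exactly this client is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        conn, err := grpc.DialContext(ctx, "localhost:50051", grpc.WithInsecure(), grpc.WithBlock())
        if err != nil {
            // mirrors the probe failure text seen at 15:20:27 below
            fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
            return
        }
        defer conn.Close()
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("health RPC failed:", err)
            return
        }
        fmt.Println("status:", resp.Status) // SERVING once the catalog is extracted
    }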
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:18:18 crc kubenswrapper[4763]: I0131 15:18:18.496453 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fdvt6" podStartSLOduration=8.067956899 podStartE2EDuration="10.496426505s" podCreationTimestamp="2026-01-31 15:18:08 +0000 UTC" firstStartedPulling="2026-01-31 15:18:09.221557405 +0000 UTC m=+1408.976295698" lastFinishedPulling="2026-01-31 15:18:11.650027011 +0000 UTC m=+1411.404765304" observedRunningTime="2026-01-31 15:18:12.270991853 +0000 UTC m=+1412.025730146" watchObservedRunningTime="2026-01-31 15:18:18.496426505 +0000 UTC m=+1418.251164838" Jan 31 15:18:19 crc kubenswrapper[4763]: I0131 15:18:19.367416 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:19 crc kubenswrapper[4763]: I0131 15:18:19.423277 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.324391 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fdvt6" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server" containerID="cri-o://53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" gracePeriod=2 Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.792122 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.851067 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") pod \"5bf14829-3173-4bb3-9696-9f721465d757\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.851271 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") pod \"5bf14829-3173-4bb3-9696-9f721465d757\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.851313 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") pod \"5bf14829-3173-4bb3-9696-9f721465d757\" (UID: \"5bf14829-3173-4bb3-9696-9f721465d757\") " Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.853127 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities" (OuterVolumeSpecName: "utilities") pod "5bf14829-3173-4bb3-9696-9f721465d757" (UID: "5bf14829-3173-4bb3-9696-9f721465d757"). InnerVolumeSpecName "utilities". 
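The pod_startup_latency_tracker entry above reports two durations for redhat-marketplace-fdvt6, and its own timestamps reconcile them exactly: podStartSLOduration is the end-to-end figure minus the image-pull window.

    pull window = lastFinishedPulling − firstStartedPulling
                = 15:18:11.650027011 − 15:18:09.221557405 = 2.428469606s
    SLO         = 10.496426505s − 2.428469606s = 8.067956899s   (= podStartSLOduration)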
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.857688 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb" (OuterVolumeSpecName: "kube-api-access-hdrbb") pod "5bf14829-3173-4bb3-9696-9f721465d757" (UID: "5bf14829-3173-4bb3-9696-9f721465d757"). InnerVolumeSpecName "kube-api-access-hdrbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.874754 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bf14829-3173-4bb3-9696-9f721465d757" (UID: "5bf14829-3173-4bb3-9696-9f721465d757"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.954427 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.954465 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bf14829-3173-4bb3-9696-9f721465d757-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:21 crc kubenswrapper[4763]: I0131 15:18:21.954483 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdrbb\" (UniqueName: \"kubernetes.io/projected/5bf14829-3173-4bb3-9696-9f721465d757-kube-api-access-hdrbb\") on node \"crc\" DevicePath \"\"" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.339897 4763 generic.go:334] "Generic (PLEG): container finished" podID="5bf14829-3173-4bb3-9696-9f721465d757" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" exitCode=0 Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340025 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fdvt6" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340348 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"} Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340424 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fdvt6" event={"ID":"5bf14829-3173-4bb3-9696-9f721465d757","Type":"ContainerDied","Data":"5cc5ef5e63ad76ca638a72e06d1076b5d187e70eab209b3913d347757ee3caf2"} Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.340466 4763 scope.go:117] "RemoveContainer" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.368172 4763 scope.go:117] "RemoveContainer" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.422440 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.423851 4763 scope.go:117] "RemoveContainer" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.436148 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fdvt6"] Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.448795 4763 scope.go:117] "RemoveContainer" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" Jan 31 15:18:22 crc kubenswrapper[4763]: E0131 15:18:22.449469 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0\": container with ID starting with 53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0 not found: ID does not exist" containerID="53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.449520 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0"} err="failed to get container status \"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0\": rpc error: code = NotFound desc = could not find container \"53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0\": container with ID starting with 53a71c2b15bce0cdce8c74ed72ce672d58c66bc19364680f17a65ab7856d47f0 not found: ID does not exist" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.449555 4763 scope.go:117] "RemoveContainer" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2" Jan 31 15:18:22 crc kubenswrapper[4763]: E0131 15:18:22.450055 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2\": container with ID starting with 3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2 not found: ID does not exist" containerID="3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.450162 4763 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2"} err="failed to get container status \"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2\": rpc error: code = NotFound desc = could not find container \"3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2\": container with ID starting with 3c9f08651352eadf7e902ff1934fabd97c1c939a0cf09c6d4a4061e49fb297f2 not found: ID does not exist" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.450243 4763 scope.go:117] "RemoveContainer" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2" Jan 31 15:18:22 crc kubenswrapper[4763]: E0131 15:18:22.450678 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2\": container with ID starting with a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2 not found: ID does not exist" containerID="a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2" Jan 31 15:18:22 crc kubenswrapper[4763]: I0131 15:18:22.450756 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2"} err="failed to get container status \"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2\": rpc error: code = NotFound desc = could not find container \"a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2\": container with ID starting with a52e4ec71f76618f2bd45eaeadeda4c6ef82478d57289deeeeb19fe736c611d2 not found: ID does not exist" Jan 31 15:18:23 crc kubenswrapper[4763]: I0131 15:18:23.054904 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf14829-3173-4bb3-9696-9f721465d757" path="/var/lib/kubelet/pods/5bf14829-3173-4bb3-9696-9f721465d757/volumes" Jan 31 15:18:44 crc kubenswrapper[4763]: I0131 15:18:44.177349 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:18:44 crc kubenswrapper[4763]: I0131 15:18:44.177824 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:19:14 crc kubenswrapper[4763]: I0131 15:19:14.177821 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:19:14 crc kubenswrapper[4763]: I0131 15:19:14.178598 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:19:16 crc kubenswrapper[4763]: E0131 
15:19:16.610359 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:19:16 crc kubenswrapper[4763]: I0131 15:19:16.788908 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:19:17 crc kubenswrapper[4763]: E0131 15:19:17.514917 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:19:17 crc kubenswrapper[4763]: I0131 15:19:17.797112 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:19:21 crc kubenswrapper[4763]: I0131 15:19:21.597118 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:19:21 crc kubenswrapper[4763]: E0131 15:19:21.597315 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:19:21 crc kubenswrapper[4763]: E0131 15:19:21.597570 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:19:21 crc kubenswrapper[4763]: E0131 15:19:21.597630 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:21:23.597611615 +0000 UTC m=+1603.352349928 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:19:22 crc kubenswrapper[4763]: I0131 15:19:22.506907 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:19:22 crc kubenswrapper[4763]: E0131 15:19:22.507098 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:19:22 crc kubenswrapper[4763]: E0131 15:19:22.507347 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:19:22 crc kubenswrapper[4763]: E0131 15:19:22.507417 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:21:24.507393967 +0000 UTC m=+1604.262132280 (durationBeforeRetry 2m2s). 
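The pod_workers errors above at 15:19:16 and 15:19:17 ("unmounted volumes=[etc-swift] ... context deadline exceeded") are a different layer giving up: each pod sync waits for all of the pod's volumes to mount under a bounded context, and after roughly two minutes of the ConfigMap still missing, that context expires and the sync is abandoned until the next attempt. A sketch of the shape of that wait — the helper, the polling interval, and the budget are assumptions for illustration; only the volume name and error wording come from the log:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func waitForVolumes(ctx context.Context, mounted func() bool) error {
        tick := time.NewTicker(300 * time.Millisecond)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                return fmt.Errorf("unmounted volumes=[etc-swift]: %w", ctx.Err())
            case <-tick.C:
                if mounted() {
                    return nil
                }
            }
        }
    }

    func main() {
        // Shortened to 2s here so the sketch finishes quickly; the kubelet's
        // budget is on the order of two minutes, judging by the log.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        err := waitForVolumes(ctx, func() bool { return false }) // ConfigMap still missing
        fmt.Println(err, errors.Is(err, context.DeadlineExceeded))
    }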
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:19:41 crc kubenswrapper[4763]: I0131 15:19:41.989495 4763 scope.go:117] "RemoveContainer" containerID="6f58266364b48d61b8c86fc9324d5893f7c7c65dee6b93a4c08f45a44a010cb1" Jan 31 15:19:42 crc kubenswrapper[4763]: I0131 15:19:42.026131 4763 scope.go:117] "RemoveContainer" containerID="5c8011a9d18428d101c91127624cd31f0ce315158e322ff2f55edf51f5e08669" Jan 31 15:19:44 crc kubenswrapper[4763]: I0131 15:19:44.179975 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:19:44 crc kubenswrapper[4763]: I0131 15:19:44.180282 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:19:44 crc kubenswrapper[4763]: I0131 15:19:44.180336 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.030645 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.030750 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" gracePeriod=600 Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.107752 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:19:45 crc kubenswrapper[4763]: I0131 15:19:45.134132 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-jh8vr"] Jan 31 15:19:45 crc kubenswrapper[4763]: E0131 15:19:45.155805 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.051788 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" exitCode=0 Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 
15:19:46.052034 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053"} Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.052394 4763 scope.go:117] "RemoveContainer" containerID="ed9e66445ed11fb3e31f897826819c87a5cfa2eaf20ae073d2a90c461528b554" Jan 31 15:19:46 crc kubenswrapper[4763]: I0131 15:19:46.053783 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:19:46 crc kubenswrapper[4763]: E0131 15:19:46.054962 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:19:47 crc kubenswrapper[4763]: I0131 15:19:47.052504 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196347d8-7892-4b32-8bc2-0127439a95f0" path="/var/lib/kubelet/pods/196347d8-7892-4b32-8bc2-0127439a95f0/volumes" Jan 31 15:20:00 crc kubenswrapper[4763]: I0131 15:20:00.042058 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:20:00 crc kubenswrapper[4763]: E0131 15:20:00.042855 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:20:13 crc kubenswrapper[4763]: I0131 15:20:13.045338 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:20:13 crc kubenswrapper[4763]: E0131 15:20:13.046147 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.964055 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"] Jan 31 15:20:15 crc kubenswrapper[4763]: E0131 15:20:15.965446 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-content" Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.965548 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-content" Jan 31 15:20:15 crc kubenswrapper[4763]: E0131 15:20:15.965631 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server" Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 
15:20:15.965723 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server" Jan 31 15:20:15 crc kubenswrapper[4763]: E0131 15:20:15.965822 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-utilities" Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.965897 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="extract-utilities" Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.966115 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf14829-3173-4bb3-9696-9f721465d757" containerName="registry-server" Jan 31 15:20:15 crc kubenswrapper[4763]: I0131 15:20:15.967397 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:15.984823 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"] Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.081247 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.081298 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.081352 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182296 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182351 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182399 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: 
I0131 15:20:16.182861 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.182954 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.203802 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"redhat-operators-l2zf2\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.352468 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:16 crc kubenswrapper[4763]: I0131 15:20:16.769399 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"] Jan 31 15:20:17 crc kubenswrapper[4763]: I0131 15:20:17.295910 4763 generic.go:334] "Generic (PLEG): container finished" podID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4" exitCode=0 Jan 31 15:20:17 crc kubenswrapper[4763]: I0131 15:20:17.296011 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"} Jan 31 15:20:17 crc kubenswrapper[4763]: I0131 15:20:17.296980 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerStarted","Data":"ad9a9c84dd32b517aa4dbfc52a3a6a70335067ab4c3d7b4f79559ecde622b549"} Jan 31 15:20:18 crc kubenswrapper[4763]: I0131 15:20:18.306514 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerStarted","Data":"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"} Jan 31 15:20:19 crc kubenswrapper[4763]: I0131 15:20:19.315157 4763 generic.go:334] "Generic (PLEG): container finished" podID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a" exitCode=0 Jan 31 15:20:19 crc kubenswrapper[4763]: I0131 15:20:19.315227 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"} Jan 31 15:20:20 crc kubenswrapper[4763]: I0131 15:20:20.325577 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" 
event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerStarted","Data":"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"} Jan 31 15:20:20 crc kubenswrapper[4763]: I0131 15:20:20.348434 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l2zf2" podStartSLOduration=2.947922591 podStartE2EDuration="5.348416584s" podCreationTimestamp="2026-01-31 15:20:15 +0000 UTC" firstStartedPulling="2026-01-31 15:20:17.296950848 +0000 UTC m=+1537.051689151" lastFinishedPulling="2026-01-31 15:20:19.697444851 +0000 UTC m=+1539.452183144" observedRunningTime="2026-01-31 15:20:20.343429763 +0000 UTC m=+1540.098168056" watchObservedRunningTime="2026-01-31 15:20:20.348416584 +0000 UTC m=+1540.103154877" Jan 31 15:20:25 crc kubenswrapper[4763]: I0131 15:20:25.042677 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:20:25 crc kubenswrapper[4763]: E0131 15:20:25.043581 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:20:26 crc kubenswrapper[4763]: I0131 15:20:26.353272 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:26 crc kubenswrapper[4763]: I0131 15:20:26.354265 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:27 crc kubenswrapper[4763]: I0131 15:20:27.414401 4763 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l2zf2" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" probeResult="failure" output=< Jan 31 15:20:27 crc kubenswrapper[4763]: timeout: failed to connect service ":50051" within 1s Jan 31 15:20:27 crc kubenswrapper[4763]: > Jan 31 15:20:36 crc kubenswrapper[4763]: I0131 15:20:36.409276 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:36 crc kubenswrapper[4763]: I0131 15:20:36.455574 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:36 crc kubenswrapper[4763]: I0131 15:20:36.644228 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"] Jan 31 15:20:37 crc kubenswrapper[4763]: I0131 15:20:37.460185 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l2zf2" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" containerID="cri-o://6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" gracePeriod=2 Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.042948 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.043769 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.437206 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469205 4763 generic.go:334] "Generic (PLEG): container finished" podID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" exitCode=0 Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469262 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"} Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469292 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l2zf2" event={"ID":"f9db86f1-db8a-45fd-9ffa-b2476ff8d085","Type":"ContainerDied","Data":"ad9a9c84dd32b517aa4dbfc52a3a6a70335067ab4c3d7b4f79559ecde622b549"} Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469310 4763 scope.go:117] "RemoveContainer" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.469449 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l2zf2" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.492526 4763 scope.go:117] "RemoveContainer" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.525924 4763 scope.go:117] "RemoveContainer" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.554712 4763 scope.go:117] "RemoveContainer" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.558181 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349\": container with ID starting with 6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349 not found: ID does not exist" containerID="6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558223 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349"} err="failed to get container status \"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349\": rpc error: code = NotFound desc = could not find container \"6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349\": container with ID starting with 6352387d652aae3447bd0c58ef69a79eb6cb93a13321db0b8055c2234def3349 not found: ID does not exist" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558255 4763 scope.go:117] "RemoveContainer" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a" Jan 31 15:20:38 crc kubenswrapper[4763]: 
E0131 15:20:38.558945 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a\": container with ID starting with 58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a not found: ID does not exist" containerID="58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558975 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a"} err="failed to get container status \"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a\": rpc error: code = NotFound desc = could not find container \"58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a\": container with ID starting with 58514587e5bfd0cd80b5759bb1257c7217e0e34b545d0e1fa419eeecd09ebc8a not found: ID does not exist" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.558989 4763 scope.go:117] "RemoveContainer" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4" Jan 31 15:20:38 crc kubenswrapper[4763]: E0131 15:20:38.559427 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4\": container with ID starting with da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4 not found: ID does not exist" containerID="da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.559456 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4"} err="failed to get container status \"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4\": rpc error: code = NotFound desc = could not find container \"da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4\": container with ID starting with da34f9f7c50bbd1345e52de9e57a73748df285653e8d216bf22f537b94e406b4 not found: ID does not exist" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.600523 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") pod \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.600611 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") pod \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.600739 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") pod \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\" (UID: \"f9db86f1-db8a-45fd-9ffa-b2476ff8d085\") " Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.601663 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities" 
(OuterVolumeSpecName: "utilities") pod "f9db86f1-db8a-45fd-9ffa-b2476ff8d085" (UID: "f9db86f1-db8a-45fd-9ffa-b2476ff8d085"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.606370 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd" (OuterVolumeSpecName: "kube-api-access-6zdhd") pod "f9db86f1-db8a-45fd-9ffa-b2476ff8d085" (UID: "f9db86f1-db8a-45fd-9ffa-b2476ff8d085"). InnerVolumeSpecName "kube-api-access-6zdhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.702580 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.702646 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdhd\" (UniqueName: \"kubernetes.io/projected/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-kube-api-access-6zdhd\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.764650 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9db86f1-db8a-45fd-9ffa-b2476ff8d085" (UID: "f9db86f1-db8a-45fd-9ffa-b2476ff8d085"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.817136 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9db86f1-db8a-45fd-9ffa-b2476ff8d085-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.821182 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"] Jan 31 15:20:38 crc kubenswrapper[4763]: I0131 15:20:38.832098 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l2zf2"] Jan 31 15:20:39 crc kubenswrapper[4763]: I0131 15:20:39.052554 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" path="/var/lib/kubelet/pods/f9db86f1-db8a-45fd-9ffa-b2476ff8d085/volumes" Jan 31 15:20:42 crc kubenswrapper[4763]: I0131 15:20:42.103799 4763 scope.go:117] "RemoveContainer" containerID="203b36129261a511c80fe2b8e1a92066fc0b81cc45cfe796a72d0edaa9da1993" Jan 31 15:20:42 crc kubenswrapper[4763]: I0131 15:20:42.129415 4763 scope.go:117] "RemoveContainer" containerID="84fd8b869419477646b3303789d7d4ce59277c7782af2bb140c22752aadb6987" Jan 31 15:20:51 crc kubenswrapper[4763]: I0131 15:20:51.047360 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:20:51 crc kubenswrapper[4763]: E0131 15:20:51.048301 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" 
Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.039748 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.046544 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.056712 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-xn2nh"] Jan 31 15:21:00 crc kubenswrapper[4763]: I0131 15:21:00.066835 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-38d2-account-create-update-cpmns"] Jan 31 15:21:01 crc kubenswrapper[4763]: I0131 15:21:01.054800 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1994b227-dbc6-494a-886d-4573eee02640" path="/var/lib/kubelet/pods/1994b227-dbc6-494a-886d-4573eee02640/volumes" Jan 31 15:21:01 crc kubenswrapper[4763]: I0131 15:21:01.055330 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a1a2199-bf73-476a-8a6b-c50b1c26aa6c" path="/var/lib/kubelet/pods/4a1a2199-bf73-476a-8a6b-c50b1c26aa6c/volumes" Jan 31 15:21:04 crc kubenswrapper[4763]: I0131 15:21:04.042195 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:04 crc kubenswrapper[4763]: E0131 15:21:04.042668 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:17 crc kubenswrapper[4763]: I0131 15:21:17.049644 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:21:17 crc kubenswrapper[4763]: I0131 15:21:17.054387 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-cfz59"] Jan 31 15:21:18 crc kubenswrapper[4763]: I0131 15:21:18.042361 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:18 crc kubenswrapper[4763]: E0131 15:21:18.043203 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:19 crc kubenswrapper[4763]: I0131 15:21:19.055156 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576fbbd2-e600-40a9-95f4-2772c96807f1" path="/var/lib/kubelet/pods/576fbbd2-e600-40a9-95f4-2772c96807f1/volumes" Jan 31 15:21:19 crc kubenswrapper[4763]: E0131 15:21:19.791294 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:21:19 crc kubenswrapper[4763]: I0131 15:21:19.844534 4763 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:21:20 crc kubenswrapper[4763]: E0131 15:21:20.798008 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:21:20 crc kubenswrapper[4763]: I0131 15:21:20.853310 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:21:23 crc kubenswrapper[4763]: I0131 15:21:23.639871 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:21:23 crc kubenswrapper[4763]: E0131 15:21:23.640084 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:21:23 crc kubenswrapper[4763]: E0131 15:21:23.640114 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:21:23 crc kubenswrapper[4763]: E0131 15:21:23.640205 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:23:25.64017921 +0000 UTC m=+1725.394917533 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:21:24 crc kubenswrapper[4763]: I0131 15:21:24.037012 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:21:24 crc kubenswrapper[4763]: I0131 15:21:24.045362 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-tjb5m"] Jan 31 15:21:24 crc kubenswrapper[4763]: I0131 15:21:24.555301 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:21:24 crc kubenswrapper[4763]: E0131 15:21:24.555505 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:21:24 crc kubenswrapper[4763]: E0131 15:21:24.555819 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:21:24 crc kubenswrapper[4763]: E0131 15:21:24.555905 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:23:26.555879382 +0000 UTC m=+1726.310617705 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:21:25 crc kubenswrapper[4763]: I0131 15:21:25.065309 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3e2d68-7406-4653-85ed-41746d3a6ea7" path="/var/lib/kubelet/pods/9b3e2d68-7406-4653-85ed-41746d3a6ea7/volumes" Jan 31 15:21:29 crc kubenswrapper[4763]: I0131 15:21:29.041655 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:29 crc kubenswrapper[4763]: E0131 15:21:29.042518 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:40 crc kubenswrapper[4763]: I0131 15:21:40.041607 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:40 crc kubenswrapper[4763]: E0131 15:21:40.043336 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.200786 4763 scope.go:117] "RemoveContainer" containerID="dc43ea79353044d825fa92732b826ecb7bb9d81d79e11fde2b7c3d0258701fc2" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.225930 4763 scope.go:117] "RemoveContainer" containerID="8180dd910ac6efc8ad5f3d6dcb80cc45662cc7fb7c88809c730b03aa35ba8bc3" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.271940 4763 scope.go:117] "RemoveContainer" containerID="22a73cc01e38d3368c2378a8856a884268bdaab9f5443ff62ac66e26d223ed89" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.304397 4763 scope.go:117] "RemoveContainer" containerID="d4b2fbc8cb2358be37b442ce5253eaaf842807de68a087674a3ea1292f2dd38e" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.333300 4763 scope.go:117] "RemoveContainer" containerID="6b27f13fa86685c4b37caba09090beecec2d3e1290d084b6ae1cf269665b318e" Jan 31 15:21:42 crc kubenswrapper[4763]: I0131 15:21:42.377354 4763 scope.go:117] "RemoveContainer" containerID="411cd9ca106798ab981b147bd785e0f4defae6f019c51c98fdfff57480304b59" Jan 31 15:21:53 crc kubenswrapper[4763]: I0131 15:21:53.042548 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:21:53 crc kubenswrapper[4763]: E0131 15:21:53.043899 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.048112 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.054362 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.060670 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-fxrtm"] Jan 31 15:22:00 crc kubenswrapper[4763]: I0131 15:22:00.066489 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-0181-account-create-update-96pj2"] Jan 31 15:22:01 crc kubenswrapper[4763]: I0131 15:22:01.050410 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464b92bd-fb87-4fc5-aa90-5460b1e35eec" path="/var/lib/kubelet/pods/464b92bd-fb87-4fc5-aa90-5460b1e35eec/volumes" Jan 31 15:22:01 crc kubenswrapper[4763]: I0131 15:22:01.051321 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be55f4fd-e12f-4bcf-ab19-d71977f3e6ec" path="/var/lib/kubelet/pods/be55f4fd-e12f-4bcf-ab19-d71977f3e6ec/volumes" Jan 31 15:22:08 crc kubenswrapper[4763]: I0131 15:22:08.042523 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:08 crc kubenswrapper[4763]: E0131 15:22:08.043566 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:21 crc kubenswrapper[4763]: I0131 15:22:21.048659 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:21 crc kubenswrapper[4763]: E0131 15:22:21.049811 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:36 crc kubenswrapper[4763]: I0131 15:22:36.041844 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:36 crc kubenswrapper[4763]: E0131 15:22:36.043117 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.494175 4763 scope.go:117] "RemoveContainer" containerID="4007498dac8fd3b8123d95c0037cc87ebfb6676c6920014087622d824443355a" Jan 31 15:22:42 crc 
kubenswrapper[4763]: I0131 15:22:42.516749 4763 scope.go:117] "RemoveContainer" containerID="a193fbbc6d4c1b876140e857534a3e3476054468d680fb5032ef95fc43449eec" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.538060 4763 scope.go:117] "RemoveContainer" containerID="6436423e21963e2b944d7661270d579b854013f8d01c88fd036a9b4a15a61846" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.553478 4763 scope.go:117] "RemoveContainer" containerID="2a92b2bd4fc8d6dca2fcd21333d8ea9dc1bea550c48e4c4023603133141572ad" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.571753 4763 scope.go:117] "RemoveContainer" containerID="aa5b58c7ff93f3c15da7f7f96ecebd65245380f1df8d802df393a3683573f42b" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.596079 4763 scope.go:117] "RemoveContainer" containerID="aa4453d984b8623efdeb6c14caeec9684c9789a6bbf3f070fda2ae53d211bc67" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.610518 4763 scope.go:117] "RemoveContainer" containerID="9716359f4da3a71b9390a156a58d83e5742038cd7c70d2aaa6dddd57c4c7402f" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.626372 4763 scope.go:117] "RemoveContainer" containerID="18e785ee82660570c7a3c1f8a825fe58e5562959092c137ca7e8ad12f67b2cdf" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.642180 4763 scope.go:117] "RemoveContainer" containerID="d21b593c360d355cf6f976e7a33dd5c9a7af1da589078440cac056c5b3195552" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.654819 4763 scope.go:117] "RemoveContainer" containerID="539252f99bc3e6264ace92eb1a171fa552e64dc1e5ab28a67e1806e3665008b7" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.671896 4763 scope.go:117] "RemoveContainer" containerID="36471f31ca52c785e38c6b2b66ccd5857f6ef478d9ab8974c38189fbf0e27a7c" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.691184 4763 scope.go:117] "RemoveContainer" containerID="a540ed0c915c9ec8346e959a99b0e8cef75297ffb67063cbd5e427a00b227441" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.708008 4763 scope.go:117] "RemoveContainer" containerID="fee6200b81ed36139edc76fff1de6f650a35a10f71c9521569ecb3d4c7be34df" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.732974 4763 scope.go:117] "RemoveContainer" containerID="433775c4538892b2b06557027ca728e6b8b86916941810cde9bf0aaa7cec78dd" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.756859 4763 scope.go:117] "RemoveContainer" containerID="5de93fda3e4eeec09c3e0cab0c53f8bd9bc5a576f08e2f6a3e358bb501d0aee7" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.771658 4763 scope.go:117] "RemoveContainer" containerID="0df182416b7bb3077be071e835d5e5e14d5a8b304cf30505e3ab3257400dd215" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.789102 4763 scope.go:117] "RemoveContainer" containerID="7de2f6d754dc155c1220dbc7207385fffee69382d5745f470315b5df89030e55" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.813801 4763 scope.go:117] "RemoveContainer" containerID="c60e5d86263ce2d232f363b5e6d0ca6b837b598ceba6f85ee0bd89925cf0c6dd" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.835207 4763 scope.go:117] "RemoveContainer" containerID="a341e6df1a37494965c9886e4a13005e2a4f651428e838086726ac3163d9cf3e" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.853293 4763 scope.go:117] "RemoveContainer" containerID="811bfef1f45e1c8bfb983ccdb9950299ee77652d3e6b07f11a1f38dcaa006989" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.865656 4763 scope.go:117] "RemoveContainer" 
containerID="46bb08319fc51c2ac2e298b3d88809c97e2206ce13b2762bb97ee19fa37761d9" Jan 31 15:22:42 crc kubenswrapper[4763]: I0131 15:22:42.882296 4763 scope.go:117] "RemoveContainer" containerID="8743be8babd948dd0c5cadcbb327888a64b81d91784dcd88e4edc80760703747" Jan 31 15:22:51 crc kubenswrapper[4763]: I0131 15:22:51.047638 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:22:51 crc kubenswrapper[4763]: E0131 15:22:51.048368 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:05 crc kubenswrapper[4763]: I0131 15:23:05.042575 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:05 crc kubenswrapper[4763]: E0131 15:23:05.043186 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:19 crc kubenswrapper[4763]: I0131 15:23:19.041562 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:19 crc kubenswrapper[4763]: E0131 15:23:19.042597 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:22 crc kubenswrapper[4763]: E0131 15:23:22.846852 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:23:22 crc kubenswrapper[4763]: I0131 15:23:22.916531 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:23:23 crc kubenswrapper[4763]: E0131 15:23:23.855345 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:23:23 crc kubenswrapper[4763]: I0131 15:23:23.925418 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:23:25 crc kubenswrapper[4763]: I0131 15:23:25.692263 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:23:25 crc kubenswrapper[4763]: E0131 15:23:25.692449 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:23:25 crc kubenswrapper[4763]: E0131 15:23:25.692716 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:23:25 crc kubenswrapper[4763]: E0131 15:23:25.692767 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:25:27.692750066 +0000 UTC m=+1847.447488359 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:23:26 crc kubenswrapper[4763]: I0131 15:23:26.603791 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:23:26 crc kubenswrapper[4763]: E0131 15:23:26.603990 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:23:26 crc kubenswrapper[4763]: E0131 15:23:26.604007 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:23:26 crc kubenswrapper[4763]: E0131 15:23:26.604069 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:25:28.604052675 +0000 UTC m=+1848.358790968 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:23:30 crc kubenswrapper[4763]: I0131 15:23:30.041329 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:30 crc kubenswrapper[4763]: E0131 15:23:30.042003 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:43 crc kubenswrapper[4763]: I0131 15:23:43.042727 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:43 crc kubenswrapper[4763]: E0131 15:23:43.044015 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:23:56 crc kubenswrapper[4763]: I0131 15:23:56.041896 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:23:56 crc kubenswrapper[4763]: E0131 15:23:56.044893 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:08 crc kubenswrapper[4763]: I0131 15:24:08.042036 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:08 crc kubenswrapper[4763]: E0131 15:24:08.043008 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:22 crc kubenswrapper[4763]: I0131 15:24:22.045426 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:22 crc kubenswrapper[4763]: E0131 15:24:22.046787 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:35 crc kubenswrapper[4763]: I0131 15:24:35.043129 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:35 crc kubenswrapper[4763]: E0131 15:24:35.044216 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:24:47 crc kubenswrapper[4763]: I0131 15:24:47.042264 4763 scope.go:117] "RemoveContainer" containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:24:47 crc kubenswrapper[4763]: I0131 15:24:47.649581 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11"} Jan 31 15:25:25 crc kubenswrapper[4763]: E0131 15:25:25.918566 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:25:25 crc kubenswrapper[4763]: I0131 15:25:25.982048 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:25:26 crc kubenswrapper[4763]: E0131 15:25:26.927842 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:25:26 crc kubenswrapper[4763]: I0131 15:25:26.990679 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:25:27 crc kubenswrapper[4763]: I0131 15:25:27.753115 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") pod \"swift-storage-0\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:25:27 crc kubenswrapper[4763]: E0131 15:25:27.753273 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:25:27 crc kubenswrapper[4763]: E0131 15:25:27.753286 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:25:27 crc kubenswrapper[4763]: E0131 15:25:27.753329 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift podName:7390eb43-2a86-4ad9-b504-6fca814daf1c nodeName:}" failed. No retries permitted until 2026-01-31 15:27:29.753317846 +0000 UTC m=+1969.508056139 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift") pod "swift-storage-0" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c") : configmap "swift-ring-files" not found Jan 31 15:25:28 crc kubenswrapper[4763]: I0131 15:25:28.669274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:25:28 crc kubenswrapper[4763]: E0131 15:25:28.669475 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:25:28 crc kubenswrapper[4763]: E0131 15:25:28.669509 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:25:28 crc kubenswrapper[4763]: E0131 15:25:28.669594 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:27:30.669568264 +0000 UTC m=+1970.424306587 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.368989 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:39 crc kubenswrapper[4763]: E0131 15:25:39.369611 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-content" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369622 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-content" Jan 31 15:25:39 crc kubenswrapper[4763]: E0131 15:25:39.369636 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-utilities" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369643 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="extract-utilities" Jan 31 15:25:39 crc kubenswrapper[4763]: E0131 15:25:39.369657 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369664 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.369801 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9db86f1-db8a-45fd-9ffa-b2476ff8d085" containerName="registry-server" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.370641 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.399522 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.463220 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.463290 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.463317 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.565231 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.565547 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.565645 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.566234 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.566344 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.589222 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"community-operators-5fm2z\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:39 crc kubenswrapper[4763]: I0131 15:25:39.690839 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:40 crc kubenswrapper[4763]: I0131 15:25:40.192339 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.116419 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" exitCode=0 Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.116509 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f"} Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.118057 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerStarted","Data":"071f2fd9f37cb5bf2b59bf23debbd381506e38b2b6d8f101edfb4c67a327bfe5"} Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.121143 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.181831 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.183070 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.204233 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.292136 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.292893 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.293089 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.394799 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.394893 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.395000 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.395434 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.395538 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.422504 4763 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"certified-operators-mb7gx\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.500686 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:41 crc kubenswrapper[4763]: I0131 15:25:41.766133 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:41 crc kubenswrapper[4763]: W0131 15:25:41.767451 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb8791e5_8470_4f7e_bf4b_ef5f031b179e.slice/crio-cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91 WatchSource:0}: Error finding container cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91: Status 404 returned error can't find the container with id cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91 Jan 31 15:25:42 crc kubenswrapper[4763]: I0131 15:25:42.130231 4763 generic.go:334] "Generic (PLEG): container finished" podID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" exitCode=0 Jan 31 15:25:42 crc kubenswrapper[4763]: I0131 15:25:42.130294 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d"} Jan 31 15:25:42 crc kubenswrapper[4763]: I0131 15:25:42.130637 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerStarted","Data":"cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91"} Jan 31 15:25:43 crc kubenswrapper[4763]: I0131 15:25:43.146545 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerStarted","Data":"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d"} Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.165299 4763 generic.go:334] "Generic (PLEG): container finished" podID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" exitCode=0 Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.165426 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb"} Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.169181 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" exitCode=0 Jan 31 15:25:44 crc kubenswrapper[4763]: I0131 15:25:44.169225 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" 
event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d"} Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.178502 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerStarted","Data":"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04"} Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.181016 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerStarted","Data":"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c"} Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.213939 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5fm2z" podStartSLOduration=2.774084409 podStartE2EDuration="6.213922699s" podCreationTimestamp="2026-01-31 15:25:39 +0000 UTC" firstStartedPulling="2026-01-31 15:25:41.120773186 +0000 UTC m=+1860.875511489" lastFinishedPulling="2026-01-31 15:25:44.560611456 +0000 UTC m=+1864.315349779" observedRunningTime="2026-01-31 15:25:45.211028695 +0000 UTC m=+1864.965766998" watchObservedRunningTime="2026-01-31 15:25:45.213922699 +0000 UTC m=+1864.968660992" Jan 31 15:25:45 crc kubenswrapper[4763]: I0131 15:25:45.233338 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mb7gx" podStartSLOduration=1.751121996 podStartE2EDuration="4.23331862s" podCreationTimestamp="2026-01-31 15:25:41 +0000 UTC" firstStartedPulling="2026-01-31 15:25:42.132879946 +0000 UTC m=+1861.887618259" lastFinishedPulling="2026-01-31 15:25:44.61507658 +0000 UTC m=+1864.369814883" observedRunningTime="2026-01-31 15:25:45.230345664 +0000 UTC m=+1864.985083967" watchObservedRunningTime="2026-01-31 15:25:45.23331862 +0000 UTC m=+1864.988056923" Jan 31 15:25:49 crc kubenswrapper[4763]: I0131 15:25:49.691411 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:49 crc kubenswrapper[4763]: I0131 15:25:49.691796 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:49 crc kubenswrapper[4763]: I0131 15:25:49.773184 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:50 crc kubenswrapper[4763]: I0131 15:25:50.303242 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:50 crc kubenswrapper[4763]: I0131 15:25:50.761097 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:51 crc kubenswrapper[4763]: I0131 15:25:51.501385 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:51 crc kubenswrapper[4763]: I0131 15:25:51.501615 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:51 crc kubenswrapper[4763]: I0131 15:25:51.574909 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.245921 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5fm2z" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" containerID="cri-o://73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" gracePeriod=2 Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.299218 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.606268 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.790893 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") pod \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.790939 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") pod \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.790971 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") pod \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\" (UID: \"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74\") " Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.792016 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities" (OuterVolumeSpecName: "utilities") pod "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" (UID: "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.801066 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm" (OuterVolumeSpecName: "kube-api-access-dtmxm") pod "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" (UID: "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74"). InnerVolumeSpecName "kube-api-access-dtmxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.846546 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" (UID: "cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.892393 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.892434 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtmxm\" (UniqueName: \"kubernetes.io/projected/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-kube-api-access-dtmxm\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:52 crc kubenswrapper[4763]: I0131 15:25:52.892449 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255549 4763 generic.go:334] "Generic (PLEG): container finished" podID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" exitCode=0 Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255627 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04"} Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255993 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5fm2z" event={"ID":"cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74","Type":"ContainerDied","Data":"071f2fd9f37cb5bf2b59bf23debbd381506e38b2b6d8f101edfb4c67a327bfe5"} Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.255684 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5fm2z" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.256053 4763 scope.go:117] "RemoveContainer" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.282641 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.283314 4763 scope.go:117] "RemoveContainer" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.295879 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5fm2z"] Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.311586 4763 scope.go:117] "RemoveContainer" containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.329158 4763 scope.go:117] "RemoveContainer" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" Jan 31 15:25:53 crc kubenswrapper[4763]: E0131 15:25:53.329603 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04\": container with ID starting with 73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04 not found: ID does not exist" containerID="73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.329644 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04"} err="failed to get container status \"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04\": rpc error: code = NotFound desc = could not find container \"73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04\": container with ID starting with 73efebac0af0a165f8d16dc2593528a2c5a81698f1864355b25ce33bcded4b04 not found: ID does not exist" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.329676 4763 scope.go:117] "RemoveContainer" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" Jan 31 15:25:53 crc kubenswrapper[4763]: E0131 15:25:53.330053 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d\": container with ID starting with 3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d not found: ID does not exist" containerID="3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.330085 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d"} err="failed to get container status \"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d\": rpc error: code = NotFound desc = could not find container \"3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d\": container with ID starting with 3207bddceca56fd2aad0e5f8e6690da11a44b26fd3b2da123e60405c81c7ef1d not found: ID does not exist" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.330109 4763 scope.go:117] "RemoveContainer" 
containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" Jan 31 15:25:53 crc kubenswrapper[4763]: E0131 15:25:53.330400 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f\": container with ID starting with 4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f not found: ID does not exist" containerID="4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.330430 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f"} err="failed to get container status \"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f\": rpc error: code = NotFound desc = could not find container \"4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f\": container with ID starting with 4bb8a75f0306dd74eceb1adaa955bf5bd177901f0843665d95f8845cd830fa7f not found: ID does not exist" Jan 31 15:25:53 crc kubenswrapper[4763]: I0131 15:25:53.564717 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.075555 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" path="/var/lib/kubelet/pods/cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74/volumes" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.272688 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mb7gx" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" containerID="cri-o://6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" gracePeriod=2 Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.727619 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.842493 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") pod \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.842755 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") pod \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.842814 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") pod \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\" (UID: \"cb8791e5-8470-4f7e-bf4b-ef5f031b179e\") " Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.843735 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities" (OuterVolumeSpecName: "utilities") pod "cb8791e5-8470-4f7e-bf4b-ef5f031b179e" (UID: "cb8791e5-8470-4f7e-bf4b-ef5f031b179e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.851037 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t" (OuterVolumeSpecName: "kube-api-access-8cz8t") pod "cb8791e5-8470-4f7e-bf4b-ef5f031b179e" (UID: "cb8791e5-8470-4f7e-bf4b-ef5f031b179e"). InnerVolumeSpecName "kube-api-access-8cz8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.887683 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb8791e5-8470-4f7e-bf4b-ef5f031b179e" (UID: "cb8791e5-8470-4f7e-bf4b-ef5f031b179e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.944842 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.944884 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cz8t\" (UniqueName: \"kubernetes.io/projected/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-kube-api-access-8cz8t\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:55 crc kubenswrapper[4763]: I0131 15:25:55.944898 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb8791e5-8470-4f7e-bf4b-ef5f031b179e-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285736 4763 generic.go:334] "Generic (PLEG): container finished" podID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" exitCode=0 Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285780 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mb7gx" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285798 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c"} Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285867 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mb7gx" event={"ID":"cb8791e5-8470-4f7e-bf4b-ef5f031b179e","Type":"ContainerDied","Data":"cff51285f5c8618d9bb64cc7ac41f7d2aa2e96174e4b0d0fdf2766d3cd566f91"} Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.285898 4763 scope.go:117] "RemoveContainer" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.304085 4763 scope.go:117] "RemoveContainer" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.324880 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.334554 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mb7gx"] Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.351316 4763 scope.go:117] "RemoveContainer" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.385777 4763 scope.go:117] "RemoveContainer" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" Jan 31 15:25:56 crc kubenswrapper[4763]: E0131 15:25:56.386289 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c\": container with ID starting with 6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c not found: ID does not exist" containerID="6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386354 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c"} err="failed to get container status \"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c\": rpc error: code = NotFound desc = could not find container \"6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c\": container with ID starting with 6b1f152a73b056090916e21805d313af388056d023a8025363ce7ee6fcaa6a7c not found: ID does not exist" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386387 4763 scope.go:117] "RemoveContainer" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" Jan 31 15:25:56 crc kubenswrapper[4763]: E0131 15:25:56.386729 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb\": container with ID starting with 3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb not found: ID does not exist" containerID="3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386764 4763 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb"} err="failed to get container status \"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb\": rpc error: code = NotFound desc = could not find container \"3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb\": container with ID starting with 3c8a18008516cdc0f864653c98c804d9adb2d194b696ce78f82e1b1f416f0fdb not found: ID does not exist" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.386786 4763 scope.go:117] "RemoveContainer" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" Jan 31 15:25:56 crc kubenswrapper[4763]: E0131 15:25:56.387001 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d\": container with ID starting with 821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d not found: ID does not exist" containerID="821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d" Jan 31 15:25:56 crc kubenswrapper[4763]: I0131 15:25:56.387028 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d"} err="failed to get container status \"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d\": rpc error: code = NotFound desc = could not find container \"821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d\": container with ID starting with 821a71c5343122b8b36f7588bf54af973ed519fa93bc250474d5bcb59d1f699d not found: ID does not exist" Jan 31 15:25:57 crc kubenswrapper[4763]: I0131 15:25:57.056988 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" path="/var/lib/kubelet/pods/cb8791e5-8470-4f7e-bf4b-ef5f031b179e/volumes" Jan 31 15:27:14 crc kubenswrapper[4763]: I0131 15:27:14.177481 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:27:14 crc kubenswrapper[4763]: I0131 15:27:14.178126 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:27:15 crc kubenswrapper[4763]: I0131 15:27:15.958679 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:15 crc kubenswrapper[4763]: E0131 15:27:15.959659 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-storage-0" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" Jan 31 15:27:15 crc kubenswrapper[4763]: I0131 15:27:15.999857 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.008865 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.049817 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.049917 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.049994 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050080 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7390eb43-2a86-4ad9-b504-6fca814daf1c\" (UID: \"7390eb43-2a86-4ad9-b504-6fca814daf1c\") " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050252 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache" (OuterVolumeSpecName: "cache") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050347 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock" (OuterVolumeSpecName: "lock") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050435 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.050453 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7390eb43-2a86-4ad9-b504-6fca814daf1c-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.058094 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.064948 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw" (OuterVolumeSpecName: "kube-api-access-bvzrw") pod "7390eb43-2a86-4ad9-b504-6fca814daf1c" (UID: "7390eb43-2a86-4ad9-b504-6fca814daf1c"). 
InnerVolumeSpecName "kube-api-access-bvzrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.151504 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.151542 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvzrw\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-kube-api-access-bvzrw\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.162624 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:27:16 crc kubenswrapper[4763]: I0131 15:27:16.252560 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.005874 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.068891 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.079102 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.124184 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125033 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125058 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125076 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125083 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125098 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125104 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125111 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125116 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125130 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" 
containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125135 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="extract-utilities" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.125146 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125151 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="extract-content" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125282 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd6ec75a-40f2-4f7e-8c91-0a3d8db7ed74" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.125291 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8791e5-8470-4f7e-bf4b-ef5f031b179e" containerName="registry-server" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.128940 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.132780 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"combined-ca-bundle" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.132944 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.141735 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165432 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165532 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165750 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165817 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165877 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"swift-storage-0\" (UID: 
\"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.165952 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.166018 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7390eb43-2a86-4ad9-b504-6fca814daf1c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267042 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.267181 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.267194 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.267239 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:17.767223852 +0000 UTC m=+1957.521962145 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267444 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267474 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.267912 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268034 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268137 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268071 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268195 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.268156 4763 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") device mount path \"/mnt/openstack/pv05\"" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.284313 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 
15:27:17.291992 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.293453 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.688188 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.690150 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.693090 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-internal-svc" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.694479 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-public-svc" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.709953 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776441 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776503 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776575 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776628 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776655 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " 
pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776769 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776797 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776816 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.776866 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.776899 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.776914 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.776962 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:18.776937958 +0000 UTC m=+1958.531676261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.877969 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878037 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878064 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878129 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878158 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878195 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878255 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.878313 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.879092 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.879141 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.879308 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.879351 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: E0131 15:27:17.879472 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:18.379443672 +0000 UTC m=+1958.134182025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.882091 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.885200 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.886499 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.886760 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:17 crc kubenswrapper[4763]: I0131 15:27:17.920654 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: 
\"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:18 crc kubenswrapper[4763]: I0131 15:27:18.386084 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.386351 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.386405 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.386527 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:19.386491445 +0000 UTC m=+1959.141229788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:27:18 crc kubenswrapper[4763]: I0131 15:27:18.792770 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.793138 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.793203 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:18 crc kubenswrapper[4763]: E0131 15:27:18.793314 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:20.793279896 +0000 UTC m=+1960.548018229 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:19 crc kubenswrapper[4763]: I0131 15:27:19.056486 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7390eb43-2a86-4ad9-b504-6fca814daf1c" path="/var/lib/kubelet/pods/7390eb43-2a86-4ad9-b504-6fca814daf1c/volumes" Jan 31 15:27:19 crc kubenswrapper[4763]: I0131 15:27:19.403977 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:19 crc kubenswrapper[4763]: E0131 15:27:19.404127 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:19 crc kubenswrapper[4763]: E0131 15:27:19.404384 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:19 crc kubenswrapper[4763]: E0131 15:27:19.404476 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:21.404445537 +0000 UTC m=+1961.159183870 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:27:20 crc kubenswrapper[4763]: I0131 15:27:20.826140 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:20 crc kubenswrapper[4763]: E0131 15:27:20.826469 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:20 crc kubenswrapper[4763]: E0131 15:27:20.826511 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:20 crc kubenswrapper[4763]: E0131 15:27:20.826609 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:24.826576148 +0000 UTC m=+1964.581314481 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:21 crc kubenswrapper[4763]: I0131 15:27:21.442580 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:21 crc kubenswrapper[4763]: E0131 15:27:21.442784 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:21 crc kubenswrapper[4763]: E0131 15:27:21.442814 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:21 crc kubenswrapper[4763]: E0131 15:27:21.442877 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:25.442860137 +0000 UTC m=+1965.197598430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:27:24 crc kubenswrapper[4763]: I0131 15:27:24.891811 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:24 crc kubenswrapper[4763]: E0131 15:27:24.892074 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:24 crc kubenswrapper[4763]: E0131 15:27:24.892092 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:24 crc kubenswrapper[4763]: E0131 15:27:24.892150 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:32.892130215 +0000 UTC m=+1972.646868508 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:25 crc kubenswrapper[4763]: I0131 15:27:25.504003 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:25 crc kubenswrapper[4763]: E0131 15:27:25.504248 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:25 crc kubenswrapper[4763]: E0131 15:27:25.504382 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:25 crc kubenswrapper[4763]: E0131 15:27:25.504453 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:33.504433048 +0000 UTC m=+1973.259171361 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:27:29 crc kubenswrapper[4763]: E0131 15:27:29.992671 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:27:30 crc kubenswrapper[4763]: I0131 15:27:30.120375 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:27:30 crc kubenswrapper[4763]: I0131 15:27:30.688288 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") pod \"swift-proxy-77c98d654c-29n65\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:27:30 crc kubenswrapper[4763]: E0131 15:27:30.688532 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:30 crc kubenswrapper[4763]: E0131 15:27:30.688570 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-77c98d654c-29n65: configmap "swift-ring-files" not found Jan 31 15:27:30 crc kubenswrapper[4763]: E0131 15:27:30.688661 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift podName:c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc nodeName:}" failed. No retries permitted until 2026-01-31 15:29:32.688631372 +0000 UTC m=+2092.443369695 (durationBeforeRetry 2m2s). 
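
Note on the retry cadence: the kubelet backs off failed volume mounts exponentially. The durationBeforeRetry values in these entries climb through 2s, 4s, 8s, 16s, 32s and 1m4s, and the longest waiter (swift-proxy-77c98d654c-29n65) has already reached 2m2s, which is consistent with a doubling backoff capped at just over two minutes. A minimal Python sketch of that schedule, assuming a 2s base and a 122s cap read off the log (the kubelet's actual constants are not shown here):

    # Reproduce the retry delays visible above, assuming a doubling
    # backoff that starts at 2s and is capped at 122s (2m2s).
    def backoff_schedule(base=2, cap=122):
        delay = base
        while True:
            yield delay
            if delay >= cap:
                break
            delay = min(delay * 2, cap)

    print([f"{d}s" for d in backoff_schedule()])
    # ['2s', '4s', '8s', '16s', '32s', '64s', '122s']  i.e. up to 1m4s, 2m2s
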
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift") pod "swift-proxy-77c98d654c-29n65" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc") : configmap "swift-ring-files" not found Jan 31 15:27:32 crc kubenswrapper[4763]: I0131 15:27:32.931929 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:32 crc kubenswrapper[4763]: E0131 15:27:32.932168 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:32 crc kubenswrapper[4763]: E0131 15:27:32.932427 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:32 crc kubenswrapper[4763]: E0131 15:27:32.932510 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:48.932485053 +0000 UTC m=+1988.687223386 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:33 crc kubenswrapper[4763]: I0131 15:27:33.544210 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:33 crc kubenswrapper[4763]: E0131 15:27:33.544476 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:33 crc kubenswrapper[4763]: E0131 15:27:33.544534 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:33 crc kubenswrapper[4763]: E0131 15:27:33.544640 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:27:49.54461218 +0000 UTC m=+1989.299350513 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:27:44 crc kubenswrapper[4763]: I0131 15:27:44.177977 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:27:44 crc kubenswrapper[4763]: I0131 15:27:44.178936 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:27:49 crc kubenswrapper[4763]: I0131 15:27:49.005887 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.006253 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.006763 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.006868 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:28:21.006836896 +0000 UTC m=+2020.761575229 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:27:49 crc kubenswrapper[4763]: I0131 15:27:49.616300 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.616561 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.616650 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:27:49 crc kubenswrapper[4763]: E0131 15:27:49.616831 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. 
No retries permitted until 2026-01-31 15:28:21.616790585 +0000 UTC m=+2021.371528918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.177502 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.178266 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.178341 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.179479 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:28:14 crc kubenswrapper[4763]: I0131 15:28:14.179591 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11" gracePeriod=600 Jan 31 15:28:15 crc kubenswrapper[4763]: I0131 15:28:15.368715 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5kfwr" podUID="d73a5142-56cf-4676-a6f1-a00868938c4d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988228 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11" exitCode=0 Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988598 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11"} Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988629 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerStarted","Data":"53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"} Jan 31 15:28:16 crc kubenswrapper[4763]: I0131 15:28:16.988649 4763 scope.go:117] "RemoveContainer" 
containerID="b4ec987776c686659c270c506ed84ad6a4491dfb7193164fe0f57c0a4ded9053" Jan 31 15:28:21 crc kubenswrapper[4763]: I0131 15:28:21.048259 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.048413 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.048674 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.048754 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift podName:b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96 nodeName:}" failed. No retries permitted until 2026-01-31 15:29:25.048734561 +0000 UTC m=+2084.803472854 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift") pod "swift-storage-0" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96") : configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: I0131 15:28:21.655950 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.656210 4763 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.656233 4763 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj: configmap "swift-ring-files" not found Jan 31 15:28:21 crc kubenswrapper[4763]: E0131 15:28:21.656330 4763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift podName:28a15191-c413-47d3-bf30-00cfea074db4 nodeName:}" failed. No retries permitted until 2026-01-31 15:29:25.656303575 +0000 UTC m=+2085.411041908 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift") pod "swift-proxy-5c474fc7f4-tg6pj" (UID: "28a15191-c413-47d3-bf30-00cfea074db4") : configmap "swift-ring-files" not found Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.634819 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"] Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.638863 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.642016 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.642442 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.669867 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"] Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779222 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779278 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779319 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779372 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779417 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.779454 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.880949 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881201 4763 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881274 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881349 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881463 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.881554 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.882595 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.882679 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.883929 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.890673 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.891123 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.910258 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"swift-ring-rebalance-9rk9x\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.978661 4763 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-ngwjv" Jan 31 15:29:16 crc kubenswrapper[4763]: I0131 15:29:16.986961 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:17 crc kubenswrapper[4763]: I0131 15:29:17.401163 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"] Jan 31 15:29:17 crc kubenswrapper[4763]: I0131 15:29:17.472554 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerStarted","Data":"d468a84447e352b3d831b84757002f0c6e26c89712b6a917b436c446766e4d6d"} Jan 31 15:29:18 crc kubenswrapper[4763]: I0131 15:29:18.482859 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerStarted","Data":"6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184"} Jan 31 15:29:18 crc kubenswrapper[4763]: I0131 15:29:18.502291 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" podStartSLOduration=2.502265998 podStartE2EDuration="2.502265998s" podCreationTimestamp="2026-01-31 15:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:18.499506953 +0000 UTC m=+2078.254245246" watchObservedRunningTime="2026-01-31 15:29:18.502265998 +0000 UTC m=+2078.257004321" Jan 31 15:29:20 crc kubenswrapper[4763]: E0131 15:29:20.153240 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" Jan 31 15:29:20 crc kubenswrapper[4763]: I0131 15:29:20.497915 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:29:20 crc kubenswrapper[4763]: E0131 15:29:20.715016 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" Jan 31 15:29:21 crc kubenswrapper[4763]: I0131 15:29:21.504850 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:24 crc kubenswrapper[4763]: I0131 15:29:24.530061 4763 generic.go:334] "Generic (PLEG): container finished" podID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerID="6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184" exitCode=0 Jan 31 15:29:24 crc kubenswrapper[4763]: I0131 15:29:24.530146 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerDied","Data":"6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184"} Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.123731 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.141150 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"swift-storage-0\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.299429 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.733803 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.740549 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"swift-proxy-5c474fc7f4-tg6pj\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.773471 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.856007 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.938635 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939083 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939133 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939156 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939175 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.939218 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") pod \"52163549-6ae3-46bc-87bc-6f909ee6f511\" (UID: \"52163549-6ae3-46bc-87bc-6f909ee6f511\") " Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.940243 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.940345 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.946997 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785" (OuterVolumeSpecName: "kube-api-access-4j785") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "kube-api-access-4j785". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.958332 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts" (OuterVolumeSpecName: "scripts") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.958604 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:25 crc kubenswrapper[4763]: I0131 15:29:25.958762 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "52163549-6ae3-46bc-87bc-6f909ee6f511" (UID: "52163549-6ae3-46bc-87bc-6f909ee6f511"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.006317 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041293 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/52163549-6ae3-46bc-87bc-6f909ee6f511-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041333 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041344 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041356 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/52163549-6ae3-46bc-87bc-6f909ee6f511-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041552 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52163549-6ae3-46bc-87bc-6f909ee6f511-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.041573 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j785\" (UniqueName: \"kubernetes.io/projected/52163549-6ae3-46bc-87bc-6f909ee6f511-kube-api-access-4j785\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.245226 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:26 crc kubenswrapper[4763]: W0131 15:29:26.252875 4763 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28a15191_c413_47d3_bf30_00cfea074db4.slice/crio-5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe WatchSource:0}: Error finding container 5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe: Status 404 returned error can't find the container with id 5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548306 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548374 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548402 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.548423 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"08b94966ac20bf1259e091c08cf195b1a8050f7facc3b6925f12ae4f6758d415"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.550900 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" event={"ID":"52163549-6ae3-46bc-87bc-6f909ee6f511","Type":"ContainerDied","Data":"d468a84447e352b3d831b84757002f0c6e26c89712b6a917b436c446766e4d6d"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.550949 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9rk9x" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.550955 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d468a84447e352b3d831b84757002f0c6e26c89712b6a917b436c446766e4d6d" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.552516 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerStarted","Data":"94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.552563 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerStarted","Data":"5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe"} Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.639632 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"] Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.649005 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9rk9x"] Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.660519 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:26 crc kubenswrapper[4763]: E0131 15:29:26.660888 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerName="swift-ring-rebalance" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.660912 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerName="swift-ring-rebalance" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.661096 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" containerName="swift-ring-rebalance" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.661727 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.666499 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.673566 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.673597 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763210 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763248 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763286 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763314 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763349 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763375 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.763429 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.865211 4763 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.865796 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.865929 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866015 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866137 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866287 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866338 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.866740 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.867074 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.867518 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.870751 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.871658 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.879369 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.884944 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"swift-ring-rebalance-hh8jm\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:26 crc kubenswrapper[4763]: I0131 15:29:26.991557 4763 util.go:30] "No sandbox for pod can be found. 
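
The swift-ring-rebalance-hh8jm entries show the volume manager's usual three-step pattern per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, with the mirror-image UnmountVolume.TearDown and "Volume detached" pair at 15:29:25-26 for the finished 9rk9x pod. Stripped to its desired-state-versus-actual-state core, the loop behaves roughly as below; the names and structure are illustrative, not kubelet code:

    # Illustrative reconciliation loop: mount whatever the pod spec wants
    # that is not mounted yet, unmount whatever no pod wants any more.
    def reconcile(desired, actual):
        for vol in sorted(desired - actual):
            print(f'MountVolume started for volume "{vol}"')
            actual.add(vol)  # stands in for a successful SetUp
        for vol in sorted(actual - desired):
            print(f'UnmountVolume started for volume "{vol}"')
            actual.discard(vol)  # stands in for TearDown + detach

    mounted = set()
    reconcile({"scripts", "swiftconf", "etc-swift", "dispersionconf",
               "ring-data-devices", "kube-api-access-tpg4s",
               "combined-ca-bundle"}, mounted)
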
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.050834 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52163549-6ae3-46bc-87bc-6f909ee6f511" path="/var/lib/kubelet/pods/52163549-6ae3-46bc-87bc-6f909ee6f511/volumes" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.230973 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:27 crc kubenswrapper[4763]: W0131 15:29:27.239477 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb93fd42_19ef_474d_9a43_5f77f863f4f3.slice/crio-fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26 WatchSource:0}: Error finding container fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26: Status 404 returned error can't find the container with id fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26 Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563279 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563555 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563634 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563722 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.563793 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.565514 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerStarted","Data":"1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.565926 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.566060 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.567656 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" 
event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerStarted","Data":"2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.567762 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerStarted","Data":"fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26"} Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.609315 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" podStartSLOduration=1.609300641 podStartE2EDuration="1.609300641s" podCreationTimestamp="2026-01-31 15:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:27.608979142 +0000 UTC m=+2087.363717435" watchObservedRunningTime="2026-01-31 15:29:27.609300641 +0000 UTC m=+2087.364038924" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.611218 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podStartSLOduration=130.611211293 podStartE2EDuration="2m10.611211293s" podCreationTimestamp="2026-01-31 15:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:27.590872574 +0000 UTC m=+2087.345610867" watchObservedRunningTime="2026-01-31 15:29:27.611211293 +0000 UTC m=+2087.365949576" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.757170 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.758435 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.779455 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.884017 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.884102 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.884155 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985293 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985376 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985421 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.985855 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:27 crc kubenswrapper[4763]: I0131 15:29:27.986077 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.003565 4763 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"redhat-marketplace-zznq2\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.076151 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599662 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599916 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599929 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599939 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599949 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.599958 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462"} Jan 31 15:29:28 crc kubenswrapper[4763]: I0131 15:29:28.628255 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.620131 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerStarted","Data":"b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5"} Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.627643 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" exitCode=0 Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.627731 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d"} Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.627812 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerStarted","Data":"200089e5c6c719127b84991a83acfdb070ea832ba2581a5989e580c35ecbd770"} Jan 31 15:29:29 crc kubenswrapper[4763]: I0131 15:29:29.653351 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=132.653335943 podStartE2EDuration="2m12.653335943s" podCreationTimestamp="2026-01-31 15:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:29:29.648905783 +0000 UTC m=+2089.403644076" watchObservedRunningTime="2026-01-31 15:29:29.653335943 +0000 UTC m=+2089.408074236" Jan 31 15:29:30 crc kubenswrapper[4763]: I0131 15:29:30.640244 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" exitCode=0 Jan 31 15:29:30 crc kubenswrapper[4763]: I0131 15:29:30.640346 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7"} Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.022964 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.052186 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.129470 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"] Jan 31 15:29:31 crc kubenswrapper[4763]: E0131 15:29:31.131636 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.649656 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.649663 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerStarted","Data":"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219"} Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.658490 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743342 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743437 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743498 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.743539 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") pod \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\" (UID: \"c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc\") " Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.744359 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.745403 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.745549 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.749896 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6" (OuterVolumeSpecName: "kube-api-access-fssq6") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "kube-api-access-fssq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.755849 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data" (OuterVolumeSpecName: "config-data") pod "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" (UID: "c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.846706 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.847038 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fssq6\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-kube-api-access-fssq6\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:31 crc kubenswrapper[4763]: I0131 15:29:31.847055 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.656108 4763 generic.go:334] "Generic (PLEG): container finished" podID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" containerID="2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff" exitCode=0 Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.656185 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerDied","Data":"2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff"} Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.656257 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-77c98d654c-29n65" Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.675861 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zznq2" podStartSLOduration=4.213732692 podStartE2EDuration="5.675844232s" podCreationTimestamp="2026-01-31 15:29:27 +0000 UTC" firstStartedPulling="2026-01-31 15:29:29.629538471 +0000 UTC m=+2089.384276764" lastFinishedPulling="2026-01-31 15:29:31.091650001 +0000 UTC m=+2090.846388304" observedRunningTime="2026-01-31 15:29:31.676082051 +0000 UTC m=+2091.430820344" watchObservedRunningTime="2026-01-31 15:29:32.675844232 +0000 UTC m=+2092.430582525" Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.701717 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"] Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.714957 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-77c98d654c-29n65"] Jan 31 15:29:32 crc kubenswrapper[4763]: I0131 15:29:32.760985 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.051118 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc" path="/var/lib/kubelet/pods/c06e0e1f-0c47-4fe4-9fa4-f83c158d02fc/volumes" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.909320 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978191 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978270 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978305 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978344 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978391 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978460 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.978527 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") pod \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\" (UID: \"bb93fd42-19ef-474d-9a43-5f77f863f4f3\") " Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.979324 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.979500 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:33 crc kubenswrapper[4763]: I0131 15:29:33.984906 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s" (OuterVolumeSpecName: "kube-api-access-tpg4s") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "kube-api-access-tpg4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.000328 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.001087 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.008394 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts" (OuterVolumeSpecName: "scripts") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.013588 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bb93fd42-19ef-474d-9a43-5f77f863f4f3" (UID: "bb93fd42-19ef-474d-9a43-5f77f863f4f3"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080346 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb93fd42-19ef-474d-9a43-5f77f863f4f3-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080379 4763 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080388 4763 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080399 4763 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080408 4763 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb93fd42-19ef-474d-9a43-5f77f863f4f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080416 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpg4s\" (UniqueName: \"kubernetes.io/projected/bb93fd42-19ef-474d-9a43-5f77f863f4f3-kube-api-access-tpg4s\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.080424 4763 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb93fd42-19ef-474d-9a43-5f77f863f4f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.671759 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" event={"ID":"bb93fd42-19ef-474d-9a43-5f77f863f4f3","Type":"ContainerDied","Data":"fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26"} Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.672197 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb42295730d17767575eebb14b287990ae5d6c880a545500d832c4d6ef96ac26" Jan 31 15:29:34 crc kubenswrapper[4763]: I0131 15:29:34.671833 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-hh8jm" Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.547466 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.591064 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.591650 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" containerID="cri-o://b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.591964 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" containerID="cri-o://5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592147 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" containerID="cri-o://b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592129 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" containerID="cri-o://3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592200 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" containerID="cri-o://4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592239 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" containerID="cri-o://13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592256 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" containerID="cri-o://4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592236 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" containerID="cri-o://3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592309 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" 
containerID="cri-o://f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592337 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" containerID="cri-o://8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592346 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" containerID="cri-o://75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592386 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" containerID="cri-o://d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592400 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" containerID="cri-o://c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592422 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" containerID="cri-o://3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.592457 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" containerID="cri-o://27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.611362 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-hh8jm"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.627026 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.627257 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" containerID="cri-o://94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037" gracePeriod=30 Jan 31 15:29:35 crc kubenswrapper[4763]: I0131 15:29:35.627380 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" containerID="cri-o://1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885" gracePeriod=30 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.008081 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" 
containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.129:8080/healthcheck\": dial tcp 10.217.0.129:8080: connect: connection refused" Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.008092 4763 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.129:8080/healthcheck\": dial tcp 10.217.0.129:8080: connect: connection refused" Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.691725 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692053 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692067 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692078 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692087 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692096 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692105 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692115 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692124 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.691748 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692185 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692212 4763 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692133 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692253 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692272 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692284 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692293 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692224 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692344 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692367 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692381 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692393 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692403 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692415 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692426 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692438 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692449 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.692461 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694198 4763 generic.go:334] "Generic (PLEG): container finished" podID="28a15191-c413-47d3-bf30-00cfea074db4" containerID="1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694220 4763 generic.go:334] "Generic (PLEG): container finished" podID="28a15191-c413-47d3-bf30-00cfea074db4" containerID="94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037" exitCode=0 Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694242 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerDied","Data":"1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.694263 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerDied","Data":"94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037"} Jan 31 15:29:36 crc kubenswrapper[4763]: I0131 15:29:36.958337 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028250 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028341 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028427 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028457 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028483 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028513 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028560 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.028611 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") pod \"28a15191-c413-47d3-bf30-00cfea074db4\" (UID: \"28a15191-c413-47d3-bf30-00cfea074db4\") " Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.029178 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.029341 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.036476 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k" (OuterVolumeSpecName: "kube-api-access-zrw7k") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "kube-api-access-zrw7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.040825 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.049632 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" path="/var/lib/kubelet/pods/bb93fd42-19ef-474d-9a43-5f77f863f4f3/volumes" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.070415 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.075434 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data" (OuterVolumeSpecName: "config-data") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.076001 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.093858 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "28a15191-c413-47d3-bf30-00cfea074db4" (UID: "28a15191-c413-47d3-bf30-00cfea074db4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130362 4763 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130428 4763 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130449 4763 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130468 4763 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130484 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130500 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrw7k\" (UniqueName: \"kubernetes.io/projected/28a15191-c413-47d3-bf30-00cfea074db4-kube-api-access-zrw7k\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130518 4763 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a15191-c413-47d3-bf30-00cfea074db4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.130532 4763 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28a15191-c413-47d3-bf30-00cfea074db4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.707323 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" event={"ID":"28a15191-c413-47d3-bf30-00cfea074db4","Type":"ContainerDied","Data":"5be2d540c55e3fd7015dc71d6c2acdb224aa3a06b89724aa15fe1f287322b4fe"} Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.707409 4763 scope.go:117] "RemoveContainer" containerID="1524702ffca715bd0db7eccd119fb193efb8441cda7e530f3c83b895f51ec885" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.707481 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.746270 4763 scope.go:117] "RemoveContainer" containerID="94d6f7a85414be8a5e46bae79d955e31d2f5e10b15b2ca8ff97bde5b77d90037" Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.746747 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:37 crc kubenswrapper[4763]: I0131 15:29:37.755027 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-5c474fc7f4-tg6pj"] Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.076870 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.076934 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.120879 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.795523 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:38 crc kubenswrapper[4763]: I0131 15:29:38.876001 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:39 crc kubenswrapper[4763]: I0131 15:29:39.059953 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28a15191-c413-47d3-bf30-00cfea074db4" path="/var/lib/kubelet/pods/28a15191-c413-47d3-bf30-00cfea074db4/volumes" Jan 31 15:29:40 crc kubenswrapper[4763]: I0131 15:29:40.737625 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zznq2" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" containerID="cri-o://3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" gracePeriod=2 Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.165043 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.304198 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") pod \"f4ae6311-229c-409c-9ef7-42c47c4d009f\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.304250 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") pod \"f4ae6311-229c-409c-9ef7-42c47c4d009f\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.304329 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") pod \"f4ae6311-229c-409c-9ef7-42c47c4d009f\" (UID: \"f4ae6311-229c-409c-9ef7-42c47c4d009f\") " Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.307515 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities" (OuterVolumeSpecName: "utilities") pod "f4ae6311-229c-409c-9ef7-42c47c4d009f" (UID: "f4ae6311-229c-409c-9ef7-42c47c4d009f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.337913 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4" (OuterVolumeSpecName: "kube-api-access-jw9k4") pod "f4ae6311-229c-409c-9ef7-42c47c4d009f" (UID: "f4ae6311-229c-409c-9ef7-42c47c4d009f"). InnerVolumeSpecName "kube-api-access-jw9k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.347145 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4ae6311-229c-409c-9ef7-42c47c4d009f" (UID: "f4ae6311-229c-409c-9ef7-42c47c4d009f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.406475 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw9k4\" (UniqueName: \"kubernetes.io/projected/f4ae6311-229c-409c-9ef7-42c47c4d009f-kube-api-access-jw9k4\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.406524 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.406537 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4ae6311-229c-409c-9ef7-42c47c4d009f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.747977 4763 generic.go:334] "Generic (PLEG): container finished" podID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" exitCode=0 Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748070 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219"} Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748111 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zznq2" event={"ID":"f4ae6311-229c-409c-9ef7-42c47c4d009f","Type":"ContainerDied","Data":"200089e5c6c719127b84991a83acfdb070ea832ba2581a5989e580c35ecbd770"} Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748139 4763 scope.go:117] "RemoveContainer" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.748934 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zznq2" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.773706 4763 scope.go:117] "RemoveContainer" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.790924 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.795235 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zznq2"] Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.822971 4763 scope.go:117] "RemoveContainer" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.854197 4763 scope.go:117] "RemoveContainer" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" Jan 31 15:29:41 crc kubenswrapper[4763]: E0131 15:29:41.854874 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219\": container with ID starting with 3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219 not found: ID does not exist" containerID="3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.854934 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219"} err="failed to get container status \"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219\": rpc error: code = NotFound desc = could not find container \"3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219\": container with ID starting with 3ca5b36f4e58e9485c8e082bd015e8867a1a68d6a5c37c51474b44ec17a46219 not found: ID does not exist" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.854963 4763 scope.go:117] "RemoveContainer" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" Jan 31 15:29:41 crc kubenswrapper[4763]: E0131 15:29:41.855395 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7\": container with ID starting with b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7 not found: ID does not exist" containerID="b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.855433 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7"} err="failed to get container status \"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7\": rpc error: code = NotFound desc = could not find container \"b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7\": container with ID starting with b73bd3f195ed1616bd190447e3ea1f637ad869c5bba2cb9216db2c389f8d9bc7 not found: ID does not exist" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.855458 4763 scope.go:117] "RemoveContainer" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" Jan 31 15:29:41 crc kubenswrapper[4763]: E0131 15:29:41.855818 4763 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d\": container with ID starting with 789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d not found: ID does not exist" containerID="789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d" Jan 31 15:29:41 crc kubenswrapper[4763]: I0131 15:29:41.855843 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d"} err="failed to get container status \"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d\": rpc error: code = NotFound desc = could not find container \"789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d\": container with ID starting with 789655d3a53368d5a5cb6cfafe1d691c90ec8cdff2c02a3f6f0d8d2c2021826d not found: ID does not exist" Jan 31 15:29:43 crc kubenswrapper[4763]: I0131 15:29:43.053053 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" path="/var/lib/kubelet/pods/f4ae6311-229c-409c-9ef7-42c47c4d009f/volumes" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.135441 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn"] Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136306 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136324 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136361 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-utilities" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136370 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-utilities" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136389 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136394 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136404 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136409 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136419 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-content" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136424 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="extract-content" Jan 31 15:30:00 crc kubenswrapper[4763]: E0131 15:30:00.136431 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" 
containerName="swift-ring-rebalance" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136436 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" containerName="swift-ring-rebalance" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136581 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb93fd42-19ef-474d-9a43-5f77f863f4f3" containerName="swift-ring-rebalance" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136597 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ae6311-229c-409c-9ef7-42c47c4d009f" containerName="registry-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136609 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-httpd" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.136618 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="28a15191-c413-47d3-bf30-00cfea074db4" containerName="proxy-server" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.137110 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.141552 4763 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.141979 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.148580 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn"] Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.288225 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.288339 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.288507 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.390144 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.390254 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.390347 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.391112 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.396484 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.417681 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"collect-profiles-29497890-dfwzn\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.492578 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.902124 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn"] Jan 31 15:30:00 crc kubenswrapper[4763]: I0131 15:30:00.920427 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" event={"ID":"87cc2c04-9338-4eae-a93c-6956d090e393","Type":"ContainerStarted","Data":"16123395064e18d5cf7cee452356c445960dfae0998242b8495d6fbbb011bed8"} Jan 31 15:30:01 crc kubenswrapper[4763]: I0131 15:30:01.929449 4763 generic.go:334] "Generic (PLEG): container finished" podID="87cc2c04-9338-4eae-a93c-6956d090e393" containerID="f53cb550e4f664c73596f1b533610c1a7d9c3e449ac8df96716136f175c798bb" exitCode=0 Jan 31 15:30:01 crc kubenswrapper[4763]: I0131 15:30:01.929501 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" event={"ID":"87cc2c04-9338-4eae-a93c-6956d090e393","Type":"ContainerDied","Data":"f53cb550e4f664c73596f1b533610c1a7d9c3e449ac8df96716136f175c798bb"} Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.246438 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.336015 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") pod \"87cc2c04-9338-4eae-a93c-6956d090e393\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.336575 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") pod \"87cc2c04-9338-4eae-a93c-6956d090e393\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.336678 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cc2c04-9338-4eae-a93c-6956d090e393" (UID: "87cc2c04-9338-4eae-a93c-6956d090e393"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.337054 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") pod \"87cc2c04-9338-4eae-a93c-6956d090e393\" (UID: \"87cc2c04-9338-4eae-a93c-6956d090e393\") " Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.337720 4763 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cc2c04-9338-4eae-a93c-6956d090e393-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.341791 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87cc2c04-9338-4eae-a93c-6956d090e393" (UID: "87cc2c04-9338-4eae-a93c-6956d090e393"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.341917 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm" (OuterVolumeSpecName: "kube-api-access-gjwgm") pod "87cc2c04-9338-4eae-a93c-6956d090e393" (UID: "87cc2c04-9338-4eae-a93c-6956d090e393"). InnerVolumeSpecName "kube-api-access-gjwgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.439658 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjwgm\" (UniqueName: \"kubernetes.io/projected/87cc2c04-9338-4eae-a93c-6956d090e393-kube-api-access-gjwgm\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.439713 4763 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cc2c04-9338-4eae-a93c-6956d090e393-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.949830 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.949868 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497890-dfwzn" event={"ID":"87cc2c04-9338-4eae-a93c-6956d090e393","Type":"ContainerDied","Data":"16123395064e18d5cf7cee452356c445960dfae0998242b8495d6fbbb011bed8"} Jan 31 15:30:03 crc kubenswrapper[4763]: I0131 15:30:03.950848 4763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16123395064e18d5cf7cee452356c445960dfae0998242b8495d6fbbb011bed8" Jan 31 15:30:04 crc kubenswrapper[4763]: I0131 15:30:04.336552 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 15:30:04 crc kubenswrapper[4763]: I0131 15:30:04.347240 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-zn989"] Jan 31 15:30:05 crc kubenswrapper[4763]: I0131 15:30:05.049789 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c40a34-73d2-4a28-b2bd-31e19e6361d2" path="/var/lib/kubelet/pods/20c40a34-73d2-4a28-b2bd-31e19e6361d2/volumes" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.000582 4763 generic.go:334] "Generic (PLEG): container finished" podID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerID="b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5" exitCode=137 Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.000649 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5"} Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.078020 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178338 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178473 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178523 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178567 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178645 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.178688 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") pod \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\" (UID: \"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96\") " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179001 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock" (OuterVolumeSpecName: "lock") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179224 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache" (OuterVolumeSpecName: "cache") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179817 4763 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.179841 4763 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.183543 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm" (OuterVolumeSpecName: "kube-api-access-sb2lm") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "kube-api-access-sb2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.186829 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.199819 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.281994 4763 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.282342 4763 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.282356 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb2lm\" (UniqueName: \"kubernetes.io/projected/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-kube-api-access-sb2lm\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.322492 4763 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.383943 4763 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.428304 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" (UID: "b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:30:06 crc kubenswrapper[4763]: I0131 15:30:06.485680 4763 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.018154 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96","Type":"ContainerDied","Data":"08b94966ac20bf1259e091c08cf195b1a8050f7facc3b6925f12ae4f6758d415"} Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.018246 4763 scope.go:117] "RemoveContainer" containerID="b0b563b544c427ad8b7a14d7adac600e852739031807938b6bfb905e2e57f1b5" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.018250 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.050857 4763 scope.go:117] "RemoveContainer" containerID="4ce013d55c9bbe30b3a876684f92e64239ef5b5ebc001cea9bdff39c5628c354" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.059387 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.062612 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.074719 4763 scope.go:117] "RemoveContainer" containerID="13645e6ffc4344b9af2d1ceda74f8853a0dba80f284062e6d075c13aefd2c584" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.090986 4763 scope.go:117] "RemoveContainer" containerID="f7a6591e71b1676140102a34e2d10a5e2e57e3288fef923f32e0ff238738e50b" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.108104 4763 scope.go:117] "RemoveContainer" containerID="75623ddf20f8fda7b57c75c03bc9e8bada9ac4dad0d734f2ba1d8790ab36d2c1" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.128971 4763 scope.go:117] "RemoveContainer" containerID="d475c3ef66bf23eca234aaaa87dccd531be71d1b54e59d9e34a453e486d2922d" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.144967 4763 scope.go:117] "RemoveContainer" containerID="3c538f45263d816240971928fab851f939c124cdc4848caf9f7b76f74cad7462" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.159985 4763 scope.go:117] "RemoveContainer" containerID="5cc3982d159833064b5c1cdf1058f4d310c549bc30d9583666bb8c50d8deb789" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.176310 4763 scope.go:117] "RemoveContainer" containerID="3334bd67954ed15598914f9f4fbef205014fe259154cc379e034e929afe48ba6" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.194650 4763 scope.go:117] "RemoveContainer" containerID="3faecda72dcb03b6c1659c54cec9556dc478a9f14238f362f4c1b75ec2935478" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.215549 4763 scope.go:117] "RemoveContainer" containerID="4ff6b19267726798e5bafd41f33c57adc7cb799816fce89a0072cb78d8920082" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.239548 4763 scope.go:117] "RemoveContainer" containerID="8a5f0dcb0aa81eaac0bb8540fa03cbb7e7d3594bfb5d9f0ba94b71a916c7c113" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.256989 4763 scope.go:117] "RemoveContainer" containerID="c6776da1a16e11de5e0299bf5a94671ceb2fc44db56762423b817128616ad972" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.280190 4763 scope.go:117] 
"RemoveContainer" containerID="27af2de829fbb2ed8d9f250d2e6bfbc1c3ba8467ddb49414f8464eb2d5f6f4ee" Jan 31 15:30:07 crc kubenswrapper[4763]: I0131 15:30:07.298887 4763 scope.go:117] "RemoveContainer" containerID="b7a53ccb6ba4646a1c596b8440d55d97505667eefcbaff4f73ec11d24991e295" Jan 31 15:30:09 crc kubenswrapper[4763]: I0131 15:30:09.052871 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" path="/var/lib/kubelet/pods/b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96/volumes" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.066218 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067233 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067260 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067286 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cc2c04-9338-4eae-a93c-6956d090e393" containerName="collect-profiles" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067298 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cc2c04-9338-4eae-a93c-6956d090e393" containerName="collect-profiles" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067309 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067320 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067335 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067347 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067372 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067382 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067396 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067406 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067424 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067434 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067445 4763 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067456 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067474 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067484 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067500 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067512 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067531 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067541 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067559 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067569 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067579 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067588 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067601 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067611 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067629 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067640 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: E0131 15:30:29.067652 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.067662 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 
15:30:29.068026 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068052 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068066 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="swift-recon-cron" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068081 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068092 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-updater" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068108 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-server" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068118 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="rsync" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068130 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-reaper" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068140 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068151 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068167 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068182 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="account-replicator" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068198 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068213 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="container-auditor" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068228 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96" containerName="object-expirer" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.068239 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cc2c04-9338-4eae-a93c-6956d090e393" containerName="collect-profiles" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.069298 4763 util.go:30] "No sandbox for pod can be found. 
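
[Annotation] The burst of RemoveStaleState / "Deleted CPUSet assignment" entries above is the CPU and memory managers purging per-container state left behind by the deleted swift-storage-0 and collect-profiles pods before the new must-gather pod is admitted. A sketch of that purge; the keyed map is illustrative, not the managers' actual storage:

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops resource assignments whose pod no longer
// exists, so reused capacity is not pinned to dead containers.
func removeStaleState(assignments map[key]string, livePods map[string]bool) {
	for k := range assignments {
		if !livePods[k.podUID] {
			fmt.Printf("Deleted CPUSet assignment podUID=%s container=%s\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key]string{
		{"b3d19fa1-48d6-4f5f-867f-f4ceefb2fe96", "object-server"}: "0-3",
	}
	removeStaleState(a, map[string]bool{}) // swift-storage-0 is gone
}
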
Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.073638 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76lr"/"kube-root-ca.crt" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.073730 4763 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76lr"/"openshift-service-ca.crt" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.089865 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.123808 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.123908 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.225349 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.225434 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.225973 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.251605 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"must-gather-dplx7\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.385049 4763 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:30:29 crc kubenswrapper[4763]: I0131 15:30:29.793356 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:30:30 crc kubenswrapper[4763]: I0131 15:30:30.235875 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerStarted","Data":"e86aab5efd3d42c42e635de7de003266abb97814e03efe21e4dff1b53d4c5441"} Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.169418 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.172243 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.182082 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.288750 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.288885 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.288973 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.390262 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.390401 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.390489 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 
15:30:33.390745 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.391019 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.416770 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"redhat-operators-2v8cv\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.505933 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:33 crc kubenswrapper[4763]: I0131 15:30:33.721663 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:33 crc kubenswrapper[4763]: W0131 15:30:33.732872 4763 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4b48077_151c_45b6_bc68_224b69ea1311.slice/crio-19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517 WatchSource:0}: Error finding container 19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517: Status 404 returned error can't find the container with id 19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517 Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.275768 4763 generic.go:334] "Generic (PLEG): container finished" podID="d4b48077-151c-45b6-bc68-224b69ea1311" containerID="a70ef95aa92cbd197490f6845d4fc54c5e33b134d5f67705df1f3bf63c69e11f" exitCode=0 Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.275883 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"a70ef95aa92cbd197490f6845d4fc54c5e33b134d5f67705df1f3bf63c69e11f"} Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.276135 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerStarted","Data":"19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517"} Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.280871 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerStarted","Data":"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f"} Jan 31 15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.280929 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerStarted","Data":"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7"} Jan 31 
15:30:34 crc kubenswrapper[4763]: I0131 15:30:34.321046 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n76lr/must-gather-dplx7" podStartSLOduration=1.5278549959999999 podStartE2EDuration="5.321017967s" podCreationTimestamp="2026-01-31 15:30:29 +0000 UTC" firstStartedPulling="2026-01-31 15:30:29.805318981 +0000 UTC m=+2149.560057274" lastFinishedPulling="2026-01-31 15:30:33.598481952 +0000 UTC m=+2153.353220245" observedRunningTime="2026-01-31 15:30:34.314183452 +0000 UTC m=+2154.068921785" watchObservedRunningTime="2026-01-31 15:30:34.321017967 +0000 UTC m=+2154.075756300" Jan 31 15:30:35 crc kubenswrapper[4763]: I0131 15:30:35.288894 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerStarted","Data":"7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833"} Jan 31 15:30:36 crc kubenswrapper[4763]: I0131 15:30:36.296607 4763 generic.go:334] "Generic (PLEG): container finished" podID="d4b48077-151c-45b6-bc68-224b69ea1311" containerID="7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833" exitCode=0 Jan 31 15:30:36 crc kubenswrapper[4763]: I0131 15:30:36.296651 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833"} Jan 31 15:30:37 crc kubenswrapper[4763]: I0131 15:30:37.305534 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerStarted","Data":"8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a"} Jan 31 15:30:37 crc kubenswrapper[4763]: I0131 15:30:37.328299 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2v8cv" podStartSLOduration=1.895086238 podStartE2EDuration="4.328282006s" podCreationTimestamp="2026-01-31 15:30:33 +0000 UTC" firstStartedPulling="2026-01-31 15:30:34.277515794 +0000 UTC m=+2154.032254087" lastFinishedPulling="2026-01-31 15:30:36.710711562 +0000 UTC m=+2156.465449855" observedRunningTime="2026-01-31 15:30:37.32511859 +0000 UTC m=+2157.079856883" watchObservedRunningTime="2026-01-31 15:30:37.328282006 +0000 UTC m=+2157.083020299" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.176842 4763 scope.go:117] "RemoveContainer" containerID="f8f3a2ee5fed8706cd33e083136df0eff635e736dd9b8ba1a9267757cea26ad5" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.506982 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.507037 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:43 crc kubenswrapper[4763]: I0131 15:30:43.564862 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:44 crc kubenswrapper[4763]: I0131 15:30:44.177034 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 31 15:30:44 crc kubenswrapper[4763]: I0131 15:30:44.177476 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:30:44 crc kubenswrapper[4763]: I0131 15:30:44.438720 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.151753 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.152227 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2v8cv" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" containerID="cri-o://8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a" gracePeriod=2 Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.396669 4763 generic.go:334] "Generic (PLEG): container finished" podID="d4b48077-151c-45b6-bc68-224b69ea1311" containerID="8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a" exitCode=0 Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.396730 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a"} Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.561326 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.583262 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") pod \"d4b48077-151c-45b6-bc68-224b69ea1311\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.583473 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") pod \"d4b48077-151c-45b6-bc68-224b69ea1311\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.584304 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities" (OuterVolumeSpecName: "utilities") pod "d4b48077-151c-45b6-bc68-224b69ea1311" (UID: "d4b48077-151c-45b6-bc68-224b69ea1311"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.584615 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") pod \"d4b48077-151c-45b6-bc68-224b69ea1311\" (UID: \"d4b48077-151c-45b6-bc68-224b69ea1311\") " Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.585065 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.592133 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd" (OuterVolumeSpecName: "kube-api-access-t55pd") pod "d4b48077-151c-45b6-bc68-224b69ea1311" (UID: "d4b48077-151c-45b6-bc68-224b69ea1311"). InnerVolumeSpecName "kube-api-access-t55pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.686985 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t55pd\" (UniqueName: \"kubernetes.io/projected/d4b48077-151c-45b6-bc68-224b69ea1311-kube-api-access-t55pd\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.711893 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4b48077-151c-45b6-bc68-224b69ea1311" (UID: "d4b48077-151c-45b6-bc68-224b69ea1311"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:30:47 crc kubenswrapper[4763]: I0131 15:30:47.787991 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4b48077-151c-45b6-bc68-224b69ea1311-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.406487 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2v8cv" event={"ID":"d4b48077-151c-45b6-bc68-224b69ea1311","Type":"ContainerDied","Data":"19b4c4ad9d2f69e4cf695f14ec510d00e0e9f6b9c213dc65e9fc4fe6eee14517"} Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.406540 4763 scope.go:117] "RemoveContainer" containerID="8e258f359d556ec52df9314f5c61020226ed81faa7151c3a364d4c38912f446a" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.406588 4763 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2v8cv" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.437446 4763 scope.go:117] "RemoveContainer" containerID="7aa5884b469812be642dcb2acf228213ccf6e2d399c7aacdf4b9d0cec4f26833" Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.442871 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.460938 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2v8cv"] Jan 31 15:30:48 crc kubenswrapper[4763]: I0131 15:30:48.474390 4763 scope.go:117] "RemoveContainer" containerID="a70ef95aa92cbd197490f6845d4fc54c5e33b134d5f67705df1f3bf63c69e11f" Jan 31 15:30:49 crc kubenswrapper[4763]: I0131 15:30:49.050914 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" path="/var/lib/kubelet/pods/d4b48077-151c-45b6-bc68-224b69ea1311/volumes" Jan 31 15:31:14 crc kubenswrapper[4763]: I0131 15:31:14.176903 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:31:14 crc kubenswrapper[4763]: I0131 15:31:14.177411 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.171385 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.321172 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.331590 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.372175 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.525585 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.536266 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/extract/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.548578 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bv44ck_82458dee-ae6f-46c9-ac1b-745146c8b9bf/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.709373 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.811179 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/util/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.812100 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/pull/0.log" Jan 31 15:31:17 crc kubenswrapper[4763]: I0131 15:31:17.860919 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.056367 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.056566 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/extract/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.058498 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e5907csbr_b6835994-86f5-4950-b010-780530fceffe/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.222776 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.384256 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.413007 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.420772 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.602165 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.603162 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/extract/0.log" Jan 31 15:31:18 
crc kubenswrapper[4763]: I0131 15:31:18.641073 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efvqx9f_6fb80892-b089-4dff-baa8-44ffdf6b9b84/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.750661 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/util/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.941368 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/pull/0.log" Jan 31 15:31:18 crc kubenswrapper[4763]: I0131 15:31:18.954655 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/util/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.000493 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/pull/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.193961 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/util/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.215329 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/pull/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.253593 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b9b155cc2d0d314117c89cae85b0b2db7ad3d44a1432b4112bbeddf36fg2k8k_7b615184-cd97-4133-b2e4-fc44e41d1e6b/extract/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.439803 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-index-2w984_ef84b681-2ea6-4684-84c0-6d452a5b47df/registry-server/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.778082 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/util/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.988924 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/util/0.log" Jan 31 15:31:19 crc kubenswrapper[4763]: I0131 15:31:19.991024 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/pull/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.012760 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/pull/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.184802 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/util/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.243825 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/pull/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.261533 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576x29rb_3636515d-8655-48d7-b0f6-54e4c6635f1c/extract/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.415472 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/util/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.638311 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/pull/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.643406 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/util/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.654890 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/pull/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.844639 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/util/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.873227 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/extract/0.log" Jan 31 15:31:20 crc kubenswrapper[4763]: I0131 15:31:20.915942 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40j6g9s_50493718-9240-44a6-bb1a-4c6c97473f2d/pull/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.100501 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5b76796566-wfzb5_ff757490-bd0f-4140-9f70-e5ec9d26353f/manager/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.193499 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-9vcjd_df73235a-c7ce-449c-b163-341974166624/registry-server/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.328639 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-njgcq_191c97ac-f003-4a51-8f06-395adf3ac8a7/registry-server/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.368109 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7598465c56-xt6m7_970b855e-e278-4e6b-b9ba-733f8f798f59/manager/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.524600 4763 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-68956c85f5-mrnqc_30bcffc2-0054-475e-af66-74b73ec95edb/manager/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.602479 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-d2rtv_29673dd0-5315-4de5-bbc4-d8deb8581b9d/registry-server/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.744484 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-2ltrp_8225c1b7-e70c-4eac-8c03-c85f86ccba6b/operator/0.log" Jan 31 15:31:21 crc kubenswrapper[4763]: I0131 15:31:21.870364 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-l9x4g_6fa47f40-fce4-4e57-aebb-3313c4c996dd/registry-server/0.log" Jan 31 15:31:22 crc kubenswrapper[4763]: I0131 15:31:22.074373 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-77769db8d5-gb8pv_42b142bb-6946-4933-841b-33c9fc9899b2/manager/0.log" Jan 31 15:31:22 crc kubenswrapper[4763]: I0131 15:31:22.105972 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-h5chr_2c571391-06de-46b1-8932-99d44a63dc42/registry-server/0.log" Jan 31 15:31:22 crc kubenswrapper[4763]: I0131 15:31:22.375758 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-56656dfbf6-dtcnh_8edc4c7b-acb9-46fd-8f5b-3f74b3a8fbb5/manager/0.log" Jan 31 15:31:37 crc kubenswrapper[4763]: I0131 15:31:37.271106 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dncp9_a7826828-7856-44a4-be9f-f1a939950c3e/control-plane-machine-set-operator/0.log" Jan 31 15:31:37 crc kubenswrapper[4763]: I0131 15:31:37.406963 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bwc2g_5d9ac26c-eb66-4772-b7ee-a6b646092c4b/kube-rbac-proxy/0.log" Jan 31 15:31:37 crc kubenswrapper[4763]: I0131 15:31:37.444740 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bwc2g_5d9ac26c-eb66-4772-b7ee-a6b646092c4b/machine-api-operator/0.log" Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.177399 4763 patch_prober.go:28] interesting pod/machine-config-daemon-9wp2x container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.177980 4763 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.178024 4763 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.178547 4763 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"} pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.178599 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerName="machine-config-daemon" containerID="cri-o://53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" gracePeriod=600 Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.786996 4763 generic.go:334] "Generic (PLEG): container finished" podID="9d1f3628-a7fe-4094-a313-96c0469fcf78" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" exitCode=0 Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.787075 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" event={"ID":"9d1f3628-a7fe-4094-a313-96c0469fcf78","Type":"ContainerDied","Data":"53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"} Jan 31 15:31:44 crc kubenswrapper[4763]: I0131 15:31:44.787343 4763 scope.go:117] "RemoveContainer" containerID="2bfabe5df54337c4196a35b7009ba75d2a72e0f1ccf13fd0226987ae1c4b7c11" Jan 31 15:31:44 crc kubenswrapper[4763]: E0131 15:31:44.807442 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:31:45 crc kubenswrapper[4763]: I0131 15:31:45.794853 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:31:45 crc kubenswrapper[4763]: E0131 15:31:45.795117 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:31:57 crc kubenswrapper[4763]: I0131 15:31:57.042787 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:31:57 crc kubenswrapper[4763]: E0131 15:31:57.043791 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:06 crc kubenswrapper[4763]: I0131 15:32:06.793438 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-f4wjv_30f91c96-0c0b-4426-986d-715d11a222b3/kube-rbac-proxy/0.log" Jan 31 15:32:06 crc kubenswrapper[4763]: I0131 
15:32:06.941352 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-f4wjv_30f91c96-0c0b-4426-986d-715d11a222b3/controller/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.020014 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.222033 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.224199 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.252171 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.252238 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.406783 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.416883 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.423151 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.427118 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.591847 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.599827 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/controller/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.614006 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-reloader/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.628132 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/cp-frr-files/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.748751 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/frr-metrics/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.781051 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/kube-rbac-proxy/0.log" Jan 31 15:32:07 crc kubenswrapper[4763]: I0131 15:32:07.857488 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/kube-rbac-proxy-frr/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.038407 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wwdkt_d9c89dc4-758c-449e-bd6c-76f27ee6ecec/frr-k8s-webhook-server/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.057828 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/reloader/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.250429 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64b6b97b4f-gbf25_5a42a356-dc67-417c-b291-c079e880aa79/manager/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.254120 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ft4k2_35cf5cc4-3973-4d1c-b52a-804293bb1f25/frr/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.407264 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6448f7d6f6-k9gcj_911c2e7f-03a5-49a2-8db7-5c63c602ef29/webhook-server/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.420530 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kf27r_8fe7a08d-0d51-422f-9477-932841b77158/kube-rbac-proxy/0.log" Jan 31 15:32:08 crc kubenswrapper[4763]: I0131 15:32:08.607354 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kf27r_8fe7a08d-0d51-422f-9477-932841b77158/speaker/0.log" Jan 31 15:32:12 crc kubenswrapper[4763]: I0131 15:32:12.041611 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:12 crc kubenswrapper[4763]: E0131 15:32:12.042132 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:22 crc kubenswrapper[4763]: I0131 15:32:22.942359 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-697dc779fb-sgr8v_5d7c9f19-bf9f-4c6c-a113-a10d6be02620/barbican-api/0.log" Jan 31 15:32:22 crc kubenswrapper[4763]: I0131 15:32:22.987789 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-697dc779fb-sgr8v_5d7c9f19-bf9f-4c6c-a113-a10d6be02620/barbican-api-log/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.097957 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-db-sync-q2gqt_d76ca4ae-ac08-455d-af41-ec673a980e8e/barbican-db-sync/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.150664 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-5b8c7cdd44-cxsxt_ae9bd061-c69e-4ff5-acd4-2b953c4b1657/barbican-keystone-listener/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.268474 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-5b8c7cdd44-cxsxt_ae9bd061-c69e-4ff5-acd4-2b953c4b1657/barbican-keystone-listener-log/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.307055 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-5477f7cb8f-8rssm_49dd2bcf-ceb5-4df8-8a24-eec8de703f88/barbican-worker/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.372405 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-5477f7cb8f-8rssm_49dd2bcf-ceb5-4df8-8a24-eec8de703f88/barbican-worker-log/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.745569 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_dc474c59-7d29-4ce0-86c8-07d96c462b4e/mysql-bootstrap/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.920614 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_dc474c59-7d29-4ce0-86c8-07d96c462b4e/mysql-bootstrap/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.924914 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_keystone-7659668474-6698l_791f5002-b2b5-488c-99c8-5ed511cffed2/keystone-api/0.log" Jan 31 15:32:23 crc kubenswrapper[4763]: I0131 15:32:23.925998 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_dc474c59-7d29-4ce0-86c8-07d96c462b4e/galera/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.172903 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_e5a89037-391b-4806-8f01-09ddd6a4d13e/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.349153 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_e5a89037-391b-4806-8f01-09ddd6a4d13e/galera/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.396331 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_e5a89037-391b-4806-8f01-09ddd6a4d13e/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.544534 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_cd0d5ccb-1d59-428e-9a53-17427cd0e5dc/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.731592 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_cd0d5ccb-1d59-428e-9a53-17427cd0e5dc/mysql-bootstrap/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.756093 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_cd0d5ccb-1d59-428e-9a53-17427cd0e5dc/galera/0.log" Jan 31 15:32:24 crc kubenswrapper[4763]: I0131 15:32:24.921262 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_dee0d43f-8ff0-4094-9833-92cda38ee182/setup-container/0.log" Jan 31 15:32:25 crc kubenswrapper[4763]: I0131 15:32:25.108954 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_dee0d43f-8ff0-4094-9833-92cda38ee182/rabbitmq/0.log" Jan 31 15:32:25 crc kubenswrapper[4763]: I0131 15:32:25.182911 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_dee0d43f-8ff0-4094-9833-92cda38ee182/setup-container/0.log" Jan 31 15:32:25 crc 
kubenswrapper[4763]: I0131 15:32:25.608060 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_memcached-0_ecb69fa0-2df1-477e-a257-05e0f1dd1c76/memcached/0.log" Jan 31 15:32:27 crc kubenswrapper[4763]: I0131 15:32:27.042017 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:27 crc kubenswrapper[4763]: E0131 15:32:27.042677 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.288054 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/util/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.431838 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/util/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.463173 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/pull/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.493020 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/pull/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.624317 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/pull/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.653358 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/util/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.705348 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcmdpj9_0f29e959-ca5d-4407-ac1d-4ce7001597aa/extract/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.796244 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-utilities/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.931682 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-content/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.933214 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-utilities/0.log" Jan 31 15:32:38 crc kubenswrapper[4763]: I0131 15:32:38.955806 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.092176 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.098240 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.292470 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.415061 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-frpn9_5b2a0eaa-d0f3-4243-bcf3-dae180a73ab3/registry-server/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.448017 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.498393 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.527154 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.628246 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-utilities/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.631060 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/extract-content/0.log" Jan 31 15:32:39 crc kubenswrapper[4763]: I0131 15:32:39.795306 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gg2dq_38baa8fd-7b8e-4c7b-ac03-d739f10d242a/marketplace-operator/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.013770 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.041295 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:40 crc kubenswrapper[4763]: E0131 15:32:40.041575 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.101034 4763 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.106556 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.175686 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.195136 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v59tf_e2f6ea13-f993-4138-b5d5-a549e9aae21b/registry-server/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.390593 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.391257 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.512213 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-znznv_d877abcd-9d8f-4597-b41c-4026d954cc62/registry-server/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.553340 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.734188 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.734216 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.744826 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-content/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.899456 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-utilities/0.log" Jan 31 15:32:40 crc kubenswrapper[4763]: I0131 15:32:40.905643 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/extract-content/0.log" Jan 31 15:32:41 crc kubenswrapper[4763]: I0131 15:32:41.448167 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-gshr8_751420d5-1809-406a-bef8-8e4015d9763b/registry-server/0.log" Jan 31 15:32:54 crc kubenswrapper[4763]: I0131 15:32:54.041999 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:32:54 crc kubenswrapper[4763]: E0131 15:32:54.042672 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:06 crc kubenswrapper[4763]: I0131 15:33:06.043168 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:06 crc kubenswrapper[4763]: E0131 15:33:06.044109 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:19 crc kubenswrapper[4763]: I0131 15:33:19.042282 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:19 crc kubenswrapper[4763]: E0131 15:33:19.043320 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:32 crc kubenswrapper[4763]: I0131 15:33:32.042420 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:32 crc kubenswrapper[4763]: E0131 15:33:32.043179 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:47 crc kubenswrapper[4763]: I0131 15:33:47.043061 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:33:47 crc kubenswrapper[4763]: E0131 15:33:47.043855 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:33:54 crc kubenswrapper[4763]: I0131 15:33:54.789555 4763 generic.go:334] "Generic (PLEG): container finished" podID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" exitCode=0 Jan 31 15:33:54 crc kubenswrapper[4763]: I0131 15:33:54.789680 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76lr/must-gather-dplx7" event={"ID":"ac81e32a-c558-4275-8b3e-448c797bb0a9","Type":"ContainerDied","Data":"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7"} Jan 
31 15:33:54 crc kubenswrapper[4763]: I0131 15:33:54.790571 4763 scope.go:117] "RemoveContainer" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:33:55 crc kubenswrapper[4763]: I0131 15:33:55.050425 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76lr_must-gather-dplx7_ac81e32a-c558-4275-8b3e-448c797bb0a9/gather/0.log" Jan 31 15:34:00 crc kubenswrapper[4763]: I0131 15:34:00.042050 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:00 crc kubenswrapper[4763]: E0131 15:34:00.043325 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:01 crc kubenswrapper[4763]: I0131 15:34:01.909763 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:34:01 crc kubenswrapper[4763]: I0131 15:34:01.910770 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-n76lr/must-gather-dplx7" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" containerID="cri-o://f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" gracePeriod=2 Jan 31 15:34:01 crc kubenswrapper[4763]: I0131 15:34:01.914609 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76lr/must-gather-dplx7"] Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.275068 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76lr_must-gather-dplx7_ac81e32a-c558-4275-8b3e-448c797bb0a9/copy/0.log" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.275837 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.423897 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") pod \"ac81e32a-c558-4275-8b3e-448c797bb0a9\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.424012 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") pod \"ac81e32a-c558-4275-8b3e-448c797bb0a9\" (UID: \"ac81e32a-c558-4275-8b3e-448c797bb0a9\") " Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.434877 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4" (OuterVolumeSpecName: "kube-api-access-h4jp4") pod "ac81e32a-c558-4275-8b3e-448c797bb0a9" (UID: "ac81e32a-c558-4275-8b3e-448c797bb0a9"). InnerVolumeSpecName "kube-api-access-h4jp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.516067 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ac81e32a-c558-4275-8b3e-448c797bb0a9" (UID: "ac81e32a-c558-4275-8b3e-448c797bb0a9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.525547 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jp4\" (UniqueName: \"kubernetes.io/projected/ac81e32a-c558-4275-8b3e-448c797bb0a9-kube-api-access-h4jp4\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.525584 4763 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ac81e32a-c558-4275-8b3e-448c797bb0a9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.849525 4763 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76lr_must-gather-dplx7_ac81e32a-c558-4275-8b3e-448c797bb0a9/copy/0.log" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.850164 4763 generic.go:334] "Generic (PLEG): container finished" podID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" exitCode=143 Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.850233 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76lr/must-gather-dplx7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.850244 4763 scope.go:117] "RemoveContainer" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.869405 4763 scope.go:117] "RemoveContainer" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.925019 4763 scope.go:117] "RemoveContainer" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" Jan 31 15:34:02 crc kubenswrapper[4763]: E0131 15:34:02.925522 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f\": container with ID starting with f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f not found: ID does not exist" containerID="f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.925568 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f"} err="failed to get container status \"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f\": rpc error: code = NotFound desc = could not find container \"f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f\": container with ID starting with f64bd2bbb09dcacba8b86de4bf4d9ce5de51d4c83a47af9ccb5face75600d22f not found: ID does not exist" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.925598 4763 scope.go:117] "RemoveContainer" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:34:02 crc 
kubenswrapper[4763]: E0131 15:34:02.926657 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7\": container with ID starting with d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7 not found: ID does not exist" containerID="d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7" Jan 31 15:34:02 crc kubenswrapper[4763]: I0131 15:34:02.926712 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7"} err="failed to get container status \"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7\": rpc error: code = NotFound desc = could not find container \"d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7\": container with ID starting with d21f71fa91317e79020828c6b9443709046ecbfb96dece8bedb5494c9e3e01f7 not found: ID does not exist" Jan 31 15:34:03 crc kubenswrapper[4763]: I0131 15:34:03.049112 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" path="/var/lib/kubelet/pods/ac81e32a-c558-4275-8b3e-448c797bb0a9/volumes" Jan 31 15:34:13 crc kubenswrapper[4763]: I0131 15:34:13.042871 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:13 crc kubenswrapper[4763]: E0131 15:34:13.044224 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:28 crc kubenswrapper[4763]: I0131 15:34:28.041547 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:28 crc kubenswrapper[4763]: E0131 15:34:28.042240 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:39 crc kubenswrapper[4763]: I0131 15:34:39.041674 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:39 crc kubenswrapper[4763]: E0131 15:34:39.042738 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:34:52 crc kubenswrapper[4763]: I0131 15:34:52.041578 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:34:52 crc kubenswrapper[4763]: E0131 15:34:52.043360 4763 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:05 crc kubenswrapper[4763]: I0131 15:35:05.041957 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:05 crc kubenswrapper[4763]: E0131 15:35:05.042661 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:16 crc kubenswrapper[4763]: I0131 15:35:16.042212 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:16 crc kubenswrapper[4763]: E0131 15:35:16.043087 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:29 crc kubenswrapper[4763]: I0131 15:35:29.043120 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:29 crc kubenswrapper[4763]: E0131 15:35:29.044929 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:43 crc kubenswrapper[4763]: I0131 15:35:43.323307 4763 scope.go:117] "RemoveContainer" containerID="2bda736f39ac7738f7e5a8baee5357943f15ea2730a9ddbd6eb3bf5d9f6ff0ff" Jan 31 15:35:43 crc kubenswrapper[4763]: I0131 15:35:43.350867 4763 scope.go:117] "RemoveContainer" containerID="6c1accfe18801fff7cebc38d02887e93a257691b6f17a553032f19d001556184" Jan 31 15:35:44 crc kubenswrapper[4763]: I0131 15:35:44.042001 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:44 crc kubenswrapper[4763]: E0131 15:35:44.042418 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:35:57 crc kubenswrapper[4763]: I0131 15:35:57.041822 4763 scope.go:117] 
"RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb" Jan 31 15:35:57 crc kubenswrapper[4763]: E0131 15:35:57.043955 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.591186 4763 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592230 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-utilities" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592255 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-utilities" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592289 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592301 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592317 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592329 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592362 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="gather" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592374 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="gather" Jan 31 15:36:02 crc kubenswrapper[4763]: E0131 15:36:02.592385 4763 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-content" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592396 4763 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="extract-content" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592622 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="copy" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592649 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b48077-151c-45b6-bc68-224b69ea1311" containerName="registry-server" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.592678 4763 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac81e32a-c558-4275-8b3e-448c797bb0a9" containerName="gather" Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.603896 4763 util.go:30] "No sandbox for pod can be found. 
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.635160 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"]
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.721054 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.721219 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.721269 4763 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822304 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822385 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822439 4763 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.822898 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.823022 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.857527 4763 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"certified-operators-99pvx\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") " pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:02 crc kubenswrapper[4763]: I0131 15:36:02.938267 4763 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.366898 4763 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"]
Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.877543 4763 generic.go:334] "Generic (PLEG): container finished" podID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" exitCode=0
Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.877646 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d"}
Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.877718 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerStarted","Data":"9f7563e8815117479b3cf1c11a99a4e602bb7abc60c65aa9020683cd323e6e67"}
Jan 31 15:36:03 crc kubenswrapper[4763]: I0131 15:36:03.879741 4763 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 15:36:04 crc kubenswrapper[4763]: I0131 15:36:04.890026 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerStarted","Data":"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff"}
Jan 31 15:36:05 crc kubenswrapper[4763]: I0131 15:36:05.898578 4763 generic.go:334] "Generic (PLEG): container finished" podID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" exitCode=0
Jan 31 15:36:05 crc kubenswrapper[4763]: I0131 15:36:05.898722 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff"}
Jan 31 15:36:06 crc kubenswrapper[4763]: I0131 15:36:06.908499 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerStarted","Data":"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f"}
Jan 31 15:36:06 crc kubenswrapper[4763]: I0131 15:36:06.937380 4763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-99pvx" podStartSLOduration=2.526957426 podStartE2EDuration="4.937362935s" podCreationTimestamp="2026-01-31 15:36:02 +0000 UTC" firstStartedPulling="2026-01-31 15:36:03.87948459 +0000 UTC m=+2483.634222883" lastFinishedPulling="2026-01-31 15:36:06.289890089 +0000 UTC m=+2486.044628392" observedRunningTime="2026-01-31 15:36:06.934018306 +0000 UTC m=+2486.688756629" watchObservedRunningTime="2026-01-31 15:36:06.937362935 +0000 UTC m=+2486.692101228"
Jan 31 15:36:11 crc kubenswrapper[4763]: I0131 15:36:11.052778 4763 scope.go:117] "RemoveContainer" containerID="53a8735f6bb745f9d67b948ceb4e96485f2a675ae5c6c50c0c1aac80291308fb"
Jan 31 15:36:11 crc kubenswrapper[4763]: E0131 15:36:11.053599 4763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9wp2x_openshift-machine-config-operator(9d1f3628-a7fe-4094-a313-96c0469fcf78)\"" pod="openshift-machine-config-operator/machine-config-daemon-9wp2x" podUID="9d1f3628-a7fe-4094-a313-96c0469fcf78"
Jan 31 15:36:12 crc kubenswrapper[4763]: I0131 15:36:12.938765 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:12 crc kubenswrapper[4763]: I0131 15:36:12.938893 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:13 crc kubenswrapper[4763]: I0131 15:36:13.018043 4763 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:13 crc kubenswrapper[4763]: I0131 15:36:13.062515 4763 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:13 crc kubenswrapper[4763]: I0131 15:36:13.254768 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"]
Jan 31 15:36:14 crc kubenswrapper[4763]: I0131 15:36:14.966940 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99pvx" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="registry-server" containerID="cri-o://3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" gracePeriod=2
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.425111 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx"
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.527733 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") pod \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") "
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.527810 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") pod \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") "
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.527858 4763 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") pod \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\" (UID: \"a0dd2603-0018-4872-95f3-a5dd2f85e8c5\") "
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.529227 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities" (OuterVolumeSpecName: "utilities") pod "a0dd2603-0018-4872-95f3-a5dd2f85e8c5" (UID: "a0dd2603-0018-4872-95f3-a5dd2f85e8c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.536385 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv" (OuterVolumeSpecName: "kube-api-access-7phzv") pod "a0dd2603-0018-4872-95f3-a5dd2f85e8c5" (UID: "a0dd2603-0018-4872-95f3-a5dd2f85e8c5"). InnerVolumeSpecName "kube-api-access-7phzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.629666 4763 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.629713 4763 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7phzv\" (UniqueName: \"kubernetes.io/projected/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-kube-api-access-7phzv\") on node \"crc\" DevicePath \"\""
Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.977469 4763 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0dd2603-0018-4872-95f3-a5dd2f85e8c5" (UID: "a0dd2603-0018-4872-95f3-a5dd2f85e8c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
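Note the ordering in this span: the API-side "SyncLoop DELETE" at 15:36:13 arrives seconds after the pod finally passed its startup and readiness probes, the registry-server container is killed with only a 2-second grace period, and the reconciler then unmounts the projected service-account token and both emptyDir volumes. When correlating kill, PLEG, and RemoveContainer entries across a journal like this, a small extractor for the containerID="..." fields can help; a throwaway helper, assuming only the line format visible above:

```go
package main

import (
	"fmt"
	"regexp"
)

// Pulls the 64-hex-digit container IDs out of kubenswrapper journal
// entries such as the "Killing container with a grace period" line above.
// The pattern is an assumption based only on the fields visible in this
// log, not part of any OpenShift tooling.
func main() {
	line := `Jan 31 15:36:14 crc kubenswrapper[4763]: I0131 15:36:14.966940 4763 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-99pvx" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerName="registry-server" containerID="cri-o://3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" gracePeriod=2`
	re := regexp.MustCompile(`containerID="(?:cri-o://)?([0-9a-f]{64})"`)
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Println("containerID:", m[1]) // 3158a1cb...
	}
}
```

Feeding every journal line through the same pattern groups all events for one container regardless of which kubelet component logged them.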
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978632 4763 generic.go:334] "Generic (PLEG): container finished" podID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" exitCode=0 Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978732 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f"} Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978778 4763 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-99pvx" event={"ID":"a0dd2603-0018-4872-95f3-a5dd2f85e8c5","Type":"ContainerDied","Data":"9f7563e8815117479b3cf1c11a99a4e602bb7abc60c65aa9020683cd323e6e67"} Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.978807 4763 scope.go:117] "RemoveContainer" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" Jan 31 15:36:15 crc kubenswrapper[4763]: I0131 15:36:15.979058 4763 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-99pvx" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.009947 4763 scope.go:117] "RemoveContainer" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.021507 4763 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.042770 4763 scope.go:117] "RemoveContainer" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.042985 4763 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0dd2603-0018-4872-95f3-a5dd2f85e8c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.050594 4763 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-99pvx"] Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.066429 4763 scope.go:117] "RemoveContainer" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" Jan 31 15:36:16 crc kubenswrapper[4763]: E0131 15:36:16.067040 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f\": container with ID starting with 3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f not found: ID does not exist" containerID="3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067083 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f"} err="failed to get container status \"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f\": rpc error: code = NotFound desc = could not find container \"3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f\": container with ID starting with 3158a1cb8f431ebb28e5bade6c51bb8837ebab86d65c3d6db8026c55b094845f not found: ID does not exist" Jan 31 
15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067107 4763 scope.go:117] "RemoveContainer" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" Jan 31 15:36:16 crc kubenswrapper[4763]: E0131 15:36:16.067381 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff\": container with ID starting with 9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff not found: ID does not exist" containerID="9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067411 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff"} err="failed to get container status \"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff\": rpc error: code = NotFound desc = could not find container \"9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff\": container with ID starting with 9aaf54c27cda76a1695560d8719f9acc95e5837e22485e9da72f5f1575464cff not found: ID does not exist" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067435 4763 scope.go:117] "RemoveContainer" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" Jan 31 15:36:16 crc kubenswrapper[4763]: E0131 15:36:16.067651 4763 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d\": container with ID starting with 1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d not found: ID does not exist" containerID="1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d" Jan 31 15:36:16 crc kubenswrapper[4763]: I0131 15:36:16.067684 4763 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d"} err="failed to get container status \"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d\": rpc error: code = NotFound desc = could not find container \"1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d\": container with ID starting with 1af8de92490840d8ec77947409013083536dae7bbd79919db4620f18e8fdfc1d not found: ID does not exist" Jan 31 15:36:17 crc kubenswrapper[4763]: I0131 15:36:17.048330 4763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0dd2603-0018-4872-95f3-a5dd2f85e8c5" path="/var/lib/kubelet/pods/a0dd2603-0018-4872-95f3-a5dd2f85e8c5/volumes"
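The three "ContainerStatus from runtime service failed ... NotFound" errors at the end are a benign ordering artifact rather than a real failure: all three containers (3158a1cb..., 9aaf54c2..., 1af8de92...) had already been removed by the earlier RemoveContainer passes, so the follow-up status lookups find nothing, the deletor logs the error, and cleanup still completes with the final "Cleaned up orphaned pod volumes dir" entry. Treating NotFound as "already gone" makes deletion idempotent; a minimal sketch of that pattern (the names, error value, and state map are illustrative, not kubelet code):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI runtime's NotFound gRPC status seen
// in the log above; this is an illustrative stand-in, not kubelet code.
var errNotFound = errors.New("container not found")

// removeContainer deletes id from the toy runtime state, reporting
// errNotFound when the container has already been removed.
func removeContainer(id string, state map[string]bool) error {
	if !state[id] {
		return fmt.Errorf("remove %s: %w", id, errNotFound)
	}
	delete(state, id)
	return nil
}

func main() {
	state := map[string]bool{"3158a1cb": true}
	// Second pass mimics the duplicate RemoveContainer calls in the log.
	for _, id := range []string{"3158a1cb", "3158a1cb"} {
		err := removeContainer(id, state)
		switch {
		case err == nil:
			fmt.Printf("removed container %s\n", id)
		case errors.Is(err, errNotFound):
			// Already gone: log and continue, as the kubelet does above.
			fmt.Printf("container %s already removed; ignoring\n", id)
		default:
			fmt.Printf("unexpected error: %v\n", err)
		}
	}
}
```

The first pass removes the container; the second hits the NotFound path and moves on, which is exactly the shape of the log tail here.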